I’m Professor Phillip (Phill) Dawson, and I research assessment, feedback, cheating and AI in higher education.

I welcome contact about research collaboration, PhD supervision, consulting, media requests, and speaking invitations.

Email me

Recordings and slides from my recent talks

Don’t fear the robot: Future-authentic assessment and generative artificial intelligence

Students are going to use ChatGPT whether we like it or not. How can we still assess them?

Download the slides

Feedback and feedback literacy

Feedback can be one of the most powerful learning processes – but only if students engage. How can we improve feedback in ways that work for students and educators?

Download the slides

How can we compare what works in stopping cheating?

Rather than slides, this is a link to the Canva file, which you can edit to make your own tier list of approaches to addressing cheating.

Edit on Canva
My latest book

Defending Assessment Security in a Digital World

Defending Assessment Security in a Digital World explores the phenomenon of e-cheating and identifies ways to bolster assessment to ensure that it is secured against threats posed by technology.

Taking a multi-disciplinary approach, the book develops the concept of assessment security through research from cybersecurity, game studies, artificial intelligence and surveillance studies. Throughout, there is a rigorous examination of the ways people cheat in different contexts, and the effectiveness of different approaches at stopping cheating. This evidence informs the development of standards and metrics for assessment security, and ways that assessment design can help address e-cheating. Its new concept of assessment security both complements and challenges traditional notions of academic integrity.

By focusing on proactive, principles-based approaches, the book equips educators, technologists and policymakers to address both current e-cheating as well as future threats.

Read a sample on Google Books

Learning and teaching resources

Here are some of the materials I’ve made or contributed to that translate my research into practical guidance.

Assessment and Generative Artificial Intelligence

This ‘CRADLE Suggests’ guide discusses strategies assessors can use in the age of generative artificial intelligence.


View the guide

Strategies for using remote proctored exams

There’s a lot of debate around remote proctored exams. This guide has some practical suggestions on how to make the most of them while minimising their potential harms.


Download the strategies

Conditions that enable effective feedback

Effective feedback requires more than just know-how. This resource identifies the conditions that help feedback work, from a study of more than 5,000 students.


View the conditions

The prevention of contract cheating in an online environment

When students complete assessment on their own, how can we be sure they have done the work themselves? This guide provides advice to educators on preventing ‘contract cheating’, the outsourcing of assessed work.


Download the guide

Ensuring academic integrity and assessment security with redesigned online delivery

This resource provides guidance to educators about how to shift their assessment online during the COVID-19 pandemic.


Download the resource

Assessment Design Decisions Framework

This resource connects research with the everyday realities of educators to provide guidance on how to improve assessment.


View the framework

Free research papers

Can training improve marker accuracy at detecting contract cheating? A multi-disciplinary pre-post study

Contract cheating occurs when students outsource assessed work. In this study, we asked experienced markers from four disciplines to detect contract cheating in a set of 20 discipline-specific assignments. We then conducted a training workshop to improve their detection accuracy, and afterwards asked them to detect contract cheating in 20 new assignments. We analysed the data in terms of sensitivity (the rate at which markers spotted contract cheating) and specificity (the rate at which markers correctly identified genuine student work). Pre-workshop marker sensitivity was 58% and specificity was 83%. Post-workshop marker sensitivity was 82% and specificity was 87%. The increase in sensitivity was statistically significant, but the increase in specificity was not. These results indicate that markers can often detect contract cheating when asked to do so, and that training may be helpful in improving their accuracy. We suggest that markers’ suspicions may be crucial in addressing contract cheating.

Download

Authentic feedback: supporting learners to engage in disciplinary feedback practices


Download

Assessment for inclusion: rethinking contemporary strategies in assessment design

Assessment has multiple purposes, one of which is to judge if students have met outcomes at the requisite level. Underperformance in assessment is frequently positioned as a problem of the student and attributed to student diversity and/or background characteristics. However, the assessment might also be inequitable and therefore exclude students inappropriately. To be inclusive, assessment design needs to be reconsidered, and educators should look beyond simplistic categories of disability or social equity groups, towards considering and accounting for diversity on many spectra. This article introduces the concept of assessment for inclusion, which seeks to ensure diverse students are not disadvantaged through assessment practices. Assumptions in assessment design are problematised from this point of view, and three central concerns relating to assessment traditions, assessment expectations, and academic integrity are interrogated. Contemporary design strategies of authentic assessment, programmatic assessment, and assessment for distinctiveness are then harnessed to illustrate approaches to assessment for inclusion. Assessment for inclusion therefore builds on the synergies between inclusive practice and good assessment design.

Download

What makes for effective feedback: staff and student perspectives

Since the early 2010s, the literature has shifted to view feedback as a process in which students make sense of information about work they have done, and use it to improve the quality of their subsequent work. In this view, effective feedback needs to demonstrate effects. However, it is unclear whether educators and students share this understanding of feedback. This paper reports a qualitative investigation of what educators and students think the purpose of feedback is, and what they think makes feedback effective. We administered a survey on feedback that was completed by 406 staff and 4514 students from two Australian universities. Inductive thematic analysis was conducted on data from a sample of 323 staff with assessment responsibilities and 400 students. Staff and students largely thought the purpose of feedback was improvement. With respect to what makes feedback effective, staff mostly discussed feedback design matters like timing, modalities and connected tasks. In contrast, students mostly wrote that high-quality feedback comments make feedback effective – especially comments that are usable, detailed, considerate of affect and personalised to the student’s own work. This study may assist researchers, educators and academic developers in refocusing their efforts in improving feedback.

Download

Assessment rubrics: towards clearer and more replicable design, research and practice

‘Rubric’ is a term with a variety of meanings. As the use of rubrics has increased both in research and practice, the term has come to represent divergent practices. These range from secret scoring sheets held by teachers to holistic student-developed articulations of quality. Rubrics are evaluated, mandated, embraced and resisted based on often imprecise and inconsistent understandings of the term. This paper provides a synthesis of the diversity of rubrics, and a framework for researchers and practitioners to be clearer about what they mean when they say ‘rubric’. Fourteen design elements or decision points are identified that make one rubric different from another. This framework subsumes previous attempts to categorise rubrics, and should provide more precision to rubric discussions and debate, as well as supporting more replicable research and practice.

Download

What feedback literate teachers do: an empirically-derived competency framework

If feedback is to be conducted effectively, then there needs to be clarity about what is involved and what is necessary for teachers to be able to undertake it well. While much attention has recently been devoted to student feedback literacy, less has been given to what is required of teaching staff in their various roles in feedback processes. This paper seeks to elucidate teacher feedback literacy through an analysis of the accounts of those who do feedback well. An inductive analysis was undertaken of conversations about feedback with 62 university teachers from five Australian universities using a dataset of transcripts of interviews and focus groups from two earlier research studies. Through an iterative process a teacher feedback literacy competency framework was developed which represents the competencies required of university teachers able to design and enact effective feedback processes. The paper discusses the different competencies required of those with different levels of responsibility, from overall course design to commenting on students’ work. It concludes by considering implications for the professional development of university teachers in the area of feedback.

Download

See more of my research on Google Scholar

Comedy!

I’ve been doing improvised comedy for five years, and I currently produce and perform in The Peer Revue, where we find the funny in academic research. Each month we feature a new guest academic who tells stories from their research, which a team of talented improvisers (including me!) use to create comedy.

Tickets and more information