Assessment

Centre for Research in Assessment and Digital Learning

I am Associate Professor and Associate Director of the Centre for Research in Assessment and Digital Learning (CRADLE) at Deakin University. The Centre's Director is Professor David Boud.

Assessment Design Decisions

I co-led the federally-funded $220k Office for Learning and Teaching project “Improving Assessment: Understanding Educational Decision-Making in Practice.” We prepared three key resources for the sector:

  1. The Assessment Design Decisions Framework identifies the key considerations university teachers face when designing assessment.
  2. The Guide to the Assessment Design Decisions Framework provides advice, resources and examples from educators for each consideration in the Framework.
  3. Five Insights for Improving University Assessment Practices identifies key steps university learning and teaching leaders can take to support improvements to assessment.

A key paper detailing how and why we made those resources is available:

Support for assessment practice: developing the Assessment Design Decisions Framework. Teaching in Higher Education, 2016. Bearman, M., Dawson, P., Boud, D., Bennett, S., Hall, M. & Molloy, E.

There are many excellent publications outlining features of assessment and feedback design in higher education. However, university educators often find these ideas challenging to realise in practice, as much of the literature focuses on institutional change rather than supporting academics. This paper describes the conceptual development of a practical framework designed to stimulate educators’ thinking when creating or modifying assessments. We explain the concepts that underpin this practical support, including the notions of ‘assessment decisions’ and ‘assessment design phases’, as informed by relevant literature and empirical data. We also present the outcome of this work: the Assessment Design Decisions Framework, which provides key considerations in six categories: purposes, contexts, tasks, interactions, feedback processes and learning outcomes. By tracing the development of the Framework, we highlight complex ways of thinking about assessment that are relevant to those who design and deliver assessment to tertiary students.

We also had a specific focus on the influence of technology on teachers’ design of assessments, which is presented in this paper:

How technology shapes assessment design: Findings from a study of university teachers. British Journal of Educational Technology, 2016. Bennett, S., Dawson, P., Bearman, M., Molloy, E. & Boud, D.

A wide range of technologies has been developed to enhance assessment, but adoption has been inconsistent. This is despite assessment being critical to student learning and certification. To understand why this is the case and how it can be addressed, we need to explore the perspectives of academics responsible for designing and implementing technology-supported assessment strategies. This paper reports on the experience of designing technology-supported assessment based on interviews with 33 Australian university teachers. The findings reveal the desire to achieve greater efficiencies and to be contemporary and innovative as key drivers of technology adoption for assessment. Participants sought to shape student behaviors through their designs and made adaptations in response to positive feedback and undesirable outcomes. Many designs required modification because of a lack of appropriate support, leading to compromise and, in some cases, abandonment. These findings highlight the challenges to effective technology-supported assessment design and demonstrate the difficulties university teachers face when attempting to negotiate mixed messages within institutions and the demands of design work. We use these findings to suggest opportunities to improve support by offering pedagogical guidance and technical help at critical stages of the design process and encouraging an iterative approach to design.

In this project we also explored different theoretical approaches to understanding assessment, such as practice theories:

Reframing assessment research: through a practice perspective. Studies in Higher Education, 2016. Boud, D., Dawson, P., Bearman, M., Bennett, S., Joughin, G. & Molloy, E.

Assessment as a field of investigation has been influenced by a limited number of perspectives. These have focused assessment research in particular ways that have emphasised measurement, student learning or institutional policies. The aim of this paper is to view the phenomenon of assessment from a practice perspective, drawing upon ideas from practice theory. Such a view places assessment practices as central. This perspective is illustrated using data from an empirical study of assessment decision-making, and uses as an exemplar the identified practice of ‘bringing a new assessment task into being’. It is suggested that a practice perspective can position assessment as integral to curriculum practices and end separations of assessment from teaching and learning. It enables research on assessment to de-centre measurement and take account of the wider range of people, phenomena and things that constitute it.

Assessment rubrics

Assessment rubrics: towards clearer and more replicable design, research and practice. Assessment & Evaluation in Higher Education, 2015. Dawson, P.

‘Rubric’ is a term with a variety of meanings. As the use of rubrics has increased both in research and practice, the term has come to represent divergent practices. These range from secret scoring sheets held by teachers to holistic student-developed articulations of quality. Rubrics are evaluated, mandated, embraced and resisted based on often imprecise and inconsistent understandings of the term. This paper provides a synthesis of the diversity of rubrics, and a framework for researchers and practitioners to be clearer about what they mean when they say ‘rubric’. Fourteen design elements or decision points are identified that make one rubric different from another. This framework subsumes previous attempts to categorise rubrics, and should provide more precision to rubric discussions and debate, as well as supporting more replicable research and practice.

Electronic exam hacking

Five ways to hack and cheat with bring-your-own-device electronic examinations. British Journal of Educational Technology, 2015. Dawson, P.

Bring-your-own-device electronic examinations (BYOD e-exams) are a relatively new type of assessment where students sit an in-person exam under invigilated conditions with their own laptop. Special software restricts student access to prohibited computer functions and files, and provides access to any resources or software the examiner approves. In this study, the decades-old computer security principle that ‘software security depends on hardware security’ is applied to a range of BYOD e-exam tools. Five potential hacks are examined, four of which are confirmed to work against at least one BYOD e-exam tool. The consequences of these hacks are significant, ranging from removal of the exam paper from the venue through to receiving live assistance from an outside expert. Potential mitigation strategies are proposed; however, these are unlikely to completely protect the integrity of BYOD e-exams. Educational institutions are urged to balance the additional affordances of BYOD e-exams for examiners against the potential affordances for cheaters.

A collection of assessment resources, papers and tools

I keep a set of assessment resources, papers and tools at the ready. It’s a curated set, so it only includes things I think are useful and sound. Download a copy, and feel free to let me know if you think I should add something.

How do university teachers design assessment?

Assessment Might Dictate the Curriculum, but What Dictates Assessment? Teaching and Learning Inquiry: The ISSOTL Journal, 2013. Dawson, P., Bearman, M., Boud, D., Hall, M., Molloy, E., Bennett, S. & Joughin, G.

Almost all tertiary educators make assessment choices, for example, when they create an assessment task, design a rubric, or write multiple-choice items. Educators potentially have access to a variety of evidence and materials regarding good assessment practice but may not choose to consult them or be successful in translating these into practice. In this article, we propose a new challenge for the Scholarship of Teaching and Learning: the need to study the disjunction between proposals for assessment “best practice” and assessment in practice by examining the assessment decision-making of teachers. We suggest that assessment decision-making involves almost all university teachers, occurs at multiple levels, and is influenced by expertise, trust, culture, and policy. Assessment may dictate the curriculum from the student’s perspective, and we argue that assessment decision-making dictates assessment.
This paper came out of the federally-funded $220k Office for Learning and Teaching project “Improving Assessment: Understanding Educational Decision-Making in Practice.”

Assessment’s tortured (linguistic) history

A contribution to the history of assessment: how a conversation simulator redeems Socratic method. Assessment & Evaluation in Higher Education, 2014. Nelson, R. & Dawson, P.

Assessment in education is a recent phenomenon. Although there were counterparts in former epochs, the term assessment only began to be spoken about in education after the Second World War; and, since that time, views, strategies and concerns over assessment have proliferated according to an uncomfortable dynamic. We fear that, increasingly, education is assessment-led rather than learning-led and ‘counter to what is desired’ in an ugly judgemental spirit whose moral underpinnings deserve scrutiny. In this article, we seek to historicise assessment and the anxieties of credentialising students. Through this longer history, we present a philosophy of assessment which underlies the development of a new method in assessment-as-learning. We hope that our development of a conversation simulator helps restore the innocence of education as learning-led, while still delivering on the incumbencies of assessment.

Competition and assessment

Competition, education and assessment: connecting history with recent scholarship. Assessment & Evaluation in Higher Education, 2015. Nelson, R. & Dawson, P.

In this article, we investigate competition in education, asking if it is good or bad, and especially if it is old and necessary or new and questionable. Using philological methods, we trace the history of competition and relate it to contemporary educational ideas. In history and modern pedagogical research, competition has a ‘dark side’ as well as energising qualities. We question the inseparability of competition and education, and, weighing up the moral and pedagogical benefits and dangers, we advocate moderation in educational competition.
