Reporting bias: why researchers don’t publish when innovative teaching approaches don’t work

There’s a lot of research about ‘what works’ in education. But what about ‘what doesn’t work’? In a recently published paper in Studies in Higher Education, we investigate a phenomenon called reporting bias, which the Cochrane Collaboration’s handbook defines as:

Reporting biases arise when the dissemination of research findings is influenced by the nature and direction of results. Statistically significant, ‘positive’ results that indicate that an intervention works are more likely to be published, more likely to be published rapidly, more likely to be published in English, more likely to be published more than once, more likely to be published in high impact journals and, related to the last point, more likely to be cited by others.

We analyse the research literature through a number of approaches and find strong evidence for reporting bias in learning and teaching. In our field, researchers usually don’t publish when a new teaching approach doesn’t give the results we expect… we chalk it up to experience or discuss it with close colleagues instead. In our paper, we discuss the problems this causes, and some potential solutions:

Sharing successes and hiding failures: ‘reporting bias’ in learning and teaching research. Studies in Higher Education, 2016. Dawson, P., & Dawson, S.

When researchers selectively report significant positive results, and omit non-significant or negative results, the published literature skews in a particular direction. This is called ‘reporting bias’, and it can cause both casual readers and meta-analysts to develop an inaccurate understanding of the efficacy of an intervention. This paper identifies potential reporting bias in a recent high-profile higher education meta-analysis. It then examines a range of potential factors that may make higher education learning and teaching research particularly susceptible to reporting bias. These include the fuzzy boundaries between learning and teaching research, scholarship and teaching; the positive agendas of ‘learning and teaching’ funding bodies; methodological issues; and para-academic researchers in roles without tenure or academic freedom. Recommendations are provided for how researchers, journals, funders, ethics committees and universities can reduce reporting bias.

We hope researchers find this a useful paper to cite when publishing negative or non-significant results in learning and teaching. Feel free to email to discuss, or to request a copy of the paper.

Some cognitive reasons why assessment improvement is hard – and what we can do about them

Improving assessment can be hard for a mix of pragmatic and pedagogical reasons. In new research just published, we extend these reasons to include psychological limits on assessment designers’ thinking and decision-making. You could think of this as the (abridged) Thinking Fast and Slow or Freakonomics of assessment design:

Improving assessment tasks through addressing our unconscious limits to change. Assessment & Evaluation in Higher Education, 2016. Joughin, G., Dawson, P., & Boud, D.

Despite widespread recognition of the need to improve assessment in higher education, assessment tasks in individual courses are too often dominated by conventional methods. While changing assessment depends on many factors, improvements to assessment ultimately depend on the decisions and actions of individual educators. This paper considers research within the ‘heuristics and biases’ tradition in the field of decision-making and judgement which has identified unconscious factors with the potential to limit capacity for such change. The paper focuses on issues that may compromise the process of improving assessment by supporting a reluctance to change existing tasks, by limiting the time allocated to develop alternative assessment tasks, by underestimating the degree of change needed or by an unwarranted overconfidence in assessment design decisions. The paper proposes countering these unconscious limitations to change by requiring justification for changing, or not changing, assessment tasks, and by informal and formal peer review of assessment task design. Finally, an agenda for research on heuristics and biases in assessment design is suggested in order to establish their presence and help counter their influence.

So how can we address these limitations? In Australia, assessment changes are heavily regulated at most universities, requiring paperwork for even the most minor changes. In my opinion, this promotes inertia; the easy option is to do nothing. What if we required a justification to keep things the same, rather than just requiring a justification for change?

As always, please get in touch if you want to discuss or if you need help getting a copy of the article.

The ‘practice’ of implementing new assessments

How does an educator go from having an idea for a new assessment, to having it implemented in their course? In a recently published paper in Studies in Higher Education, we used practice theory to help understand ‘bringing a new task into being’. We hope that using practice theory as a way to understand assessment might help us move beyond just measurement and learning, to understand the sayings and doings, contexts, relationships and materials of assessment:

Assessment as a field of investigation has been influenced by a limited number of perspectives. These have focused assessment research in particular ways that have emphasised measurement, or student learning or institutional policies. The aim of this paper is to view the phenomenon of assessment from a practice perspective drawing upon ideas from practice theory. Such a view places assessment practices as central. This perspective is illustrated using data from an empirical study of assessment decision-making and uses as an exemplar the identified practice of ‘bringing a new assessment task into being’. It is suggested that a practice perspective can position assessment as integral to curriculum practices and end separations of assessment from teaching and learning. It enables research on assessment to de-centre measurement and take account of the wider range of people, phenomena and things that constitute it.

Details on the article are below. The first 50 people to follow the link get a free copy of the article; get in touch with me if you have any difficulties.

Reframing assessment research: through a practice perspective. Studies in Higher Education, 2016. Boud, D., Dawson, P., Bearman, M., Bennett, S., Joughin, G. & Molloy, E.

New project “Feedback for learning: closing the assessment loop”

I’m delighted to be part of a team on a new $280k Office for Learning and Teaching project titled “Feedback for learning: closing the assessment loop”. The project is led by A/Prof Michael Henderson from Monash University:

Feedback (during and after assessment tasks) is critical for effectively promoting student learning. Without feedback students are limited in how they can make judgements as to their progress, and how they can change their future performance. Feedback is the lynchpin to students’ effective decision making, and the basis of improved learning outcomes. However, feedback is under-utilised and often misunderstood by both students and academics. This project is about improving student learning (and experience) through improving institutional, academic, and student capacity to stimulate and leverage assessment feedback.

The aim of this project is to improve student learning and experience by improving the way in which the Australian Higher Education sector enacts feedback. Our approach will deliver a pragmatic, empirically based framework of feedback designs to guide academics, academic developers and instructional designers, as well as institutional policy. This will be supported by large scale data highlighting patterns of success and 10 rich cases of feedback designs to demonstrate how that success can be achieved. In addition, this project will increase the likelihood of adoption through a series of dissemination activities including national workshops built on a participatory design approach.

More to come as it’s available.

4 ways technology shapes assessment designs

As part of the Assessment Design Decisions project, we spoke with 33 Australian university educators about how technology influences their assessment design processes. We recently published a paper in the British Journal of Educational Technology with our results. Our four key themes are:

  1. Technology is enmeshed in the ‘economics of assessment’
  2. Technology is seen as ‘contemporary and innovative’
  3. Technology aims to shape student behaviour – and technology is shaped by student behaviour
  4. Support and compromise were necessary for technology to really support assessment

Details on the article are below. Please get in touch if you want to discuss or if you need help getting a copy of the article.

How technology shapes assessment design: Findings from a study of university teachers. British Journal of Educational Technology, 2016. Bennett, S., Dawson, P., Bearman, M., Molloy, E. & Boud, D.

A wide range of technologies has been developed to enhance assessment, but adoption has been inconsistent. This is despite assessment being critical to student learning and certification. To understand why this is the case and how it can be addressed, we need to explore the perspectives of academics responsible for designing and implementing technology-supported assessment strategies. This paper reports on the experience of designing technology-supported assessment based on interviews with 33 Australian university teachers. The findings reveal the desire to achieve greater efficiencies and to be contemporary and innovative as key drivers of technology adoption for assessment. Participants sought to shape student behaviors through their designs and made adaptations in response to positive feedback and undesirable outcomes. Many designs required modification because of a lack of appropriate support, leading to compromise and, in some cases, abandonment. These findings highlight the challenges to effective technology-supported assessment design and demonstrate the difficulties university teachers face when attempting to negotiate mixed messages within institutions and the demands of design work. We use these findings to suggest opportunities to improve support by offering pedagogical guidance and technical help at critical stages of the design process and encouraging an iterative approach to design.

Support for assessment practice: developing the Assessment Design Decisions Framework

In 2012 I co-led a team with Margaret Bearman to investigate the question: “How do university teachers make decisions about assessment?” This led us to talk with academics from around the country on how they do their assessment design work – and what we can do to help. We ended up producing the Assessment Design Decisions suite of resources, with support from an Office for Learning and Teaching grant. We just had an important paper published from that project which shows the working behind those resources.

Support for assessment practice: developing the Assessment Design Decisions Framework. Teaching in Higher Education, 2016. Bearman, M., Dawson, P., Boud, D., Bennett, S., Hall, M. & Molloy, E.

There are many excellent publications outlining features of assessment and feedback design in higher education. However, university educators often find these ideas challenging to realise in practice, as much of the literature focuses on institutional change rather than supporting academics. This paper describes the conceptual development of a practical framework designed to stimulate educators’ thinking when creating or modifying assessments. We explain the concepts that underpin this practical support, including the notions of ‘assessment decisions’ and ‘assessment design phases’, as informed by relevant literature and empirical data. We also present the outcome of this work: the Assessment Design Decisions Framework. This provides key considerations in six categories: purposes, contexts, tasks, interactions, feedback processes and learning outcomes. By tracing the development of the Framework, we highlight complex ways of thinking about assessment that are relevant to those who design and deliver assessment to tertiary students.

Reflective practice on reflective practice

The most watched YouTube video on my channel is my 2012 video on Reflective Practice. Recently, Clive Buckley from Glyndwr University invited me to expand on that video for his MSc Learning and Technology students. Here is the result – a sort of reflective practice on reflective practice:

Here is the original 2012 video:

A fun piece of trivia: this video was recorded at home while my son was a few weeks old. My wife was wheeling him around the block in the pram and I only had one take to get it right!

Moving online: the future of universities in the online world

I spoke with Claire Nichols from ABC Radio National’s Summer Breakfast program about the future of lectures and exams.

It’s an anxious time for many school leavers as they wait to receive their university offers for this year.

When they do begin classes in the coming weeks, it’s likely to be a very different learning experience than that of a few years ago.

More classes will be delivered online, with lectures becoming less common, and even the dreaded end-of-semester exam could be on its way out.

Is online a better way to learn? And how are universities going about implementing this change to how subjects are taught?

Download the podcast here.