Think of university assessment and it probably conjures anxiety. As David Boud notes:
even successful, able and committed students – those who become university teachers – have been hurt by their experiences of assessment, time and time again, through school and through higher education.
Even the very language of assessment, essays and exams is etymologically closer to torture, taxation and trial than anything educational. It is understandable that we grow fearful of it. But within the world of assessment, nothing seems more polarising than the multiple-choice question.
Recently, multiple-choice questions have come under fire, with one Australian university abandoning their use in high-stakes exams. In Australian higher education, the format is variously derided as unhelpful for learning or overly prone to guessing. So are multiple-choice questions a) good or b) bad? The correct answer is c) neither of the above; it depends on how they are used.
One concern raised by critics of multiple-choice questions is that students can guess their way to a substantial percentage of their grade. This concern has occupied the research community for decades, and many approaches have been developed to reduce the effect of guessing.
For example, you can split up one multiple-choice question into a series of true/false questions. Or you might require the student to select both of two correct responses out of a set of five. Both of these approaches make guessing much less effective.
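To see why these formats blunt guessing, a quick back-of-the-envelope calculation helps. The numbers below are illustrative, assuming a conventional four-option question, a run of four true/false items standing in for one question, and the "both of two correct out of five" format mentioned above:

```python
from math import comb

# Probability of full marks on one item by pure guessing, under three formats.

# Standard question: one correct answer among four options
standard = 1 / 4

# The same content split into four independent true/false questions,
# all of which must be answered correctly
true_false_series = (1 / 2) ** 4

# Select both of the two correct responses out of a set of five:
# one chance among all two-option subsets of five
both_of_five = 1 / comb(5, 2)

print(f"standard four-option question: {standard:.2%}")        # 25.00%
print(f"four true/false items:         {true_false_series:.2%}")  # 6.25%
print(f"both-of-two out of five:       {both_of_five:.2%}")     # 10.00%
```

On these assumptions, a guesser's expected haul drops from 25% per item to 10% or less, which is why such formats are considered more resistant to guessing.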
Another approach to reducing guessing is to craft devious distractor answers that are almost correct. Unfortunately this can harm learning: one study found that students sometimes walk away from the test believing the distractors. But although some students learned falsehoods from the test, this harm was outweighed by what is known as the “testing effect”: under certain circumstances, taking a test can enhance long-term memory of what was in the test.
Many universities are moving towards online examination approaches, which often rely heavily on automatically marked multiple-choice questions. Although seen as a dystopian future by some, automatic marking can also provide immediate feedback. This not only overcomes the problem of learning falsehoods from the test; it can even improve learning from it.
Research published earlier this month finds that multiple-choice questions can even help stabilise access to “marginal knowledge”: those tip-of-the-tongue factoids that you know you know but can’t easily recall. So if you know who wrote The Geebung Polo Club but can’t recall it right now, a well-crafted multiple-choice question can help you access that information and lock it in for the future.
Amazingly, feedback is not even required to benefit from knowledge stabilisation via multiple-choice questions. This is a particularly useful tool for educators at the start of semester: multiple-choice questions can help students stabilise access to knowledge that might otherwise be inaccessible, while at the same time helping the educator gauge student understanding.
There is certainly a place for multiple-choice questions when many students need to be assessed on substantial quantities of lower-level knowledge: the sorts of learning outcomes that begin with words like “identify”. Clever multiple-choice questions are often developed to address higher-order learning too.
Although good multiple-choice questions take time to write, they can be very resource-efficient with large cohorts. Using multiple-choice questions where they are appropriate can free up assessment resources to be used in innovative ways elsewhere in a course.
There is even work underway to use multiple-choice questions to construct innovative types of assessment. Education researcher Robert Nelson and I have used multiple-choice questions to simulate a conversation with a student, and have used this approach to simultaneously teach and assess. We have even applied this to contentious topics that have a reputation of being multiple-choice-unfriendly, such as teaching research ethics.
Multiple-choice questions are but one option on the assessor’s palette. Like all the others, they have their own pros and cons, and situations where they are most useful.
Decisions to use one form of assessment over another are complex and not very well understood. But bans on any particular type of assessment are probably not a wise move. Instead, we should provide targeted support and professional development for academics to help decide when to use different types of assessment.
Phillip Dawson receives funding from the Office for Learning and Teaching.