The ‘practice’ of implementing new assessments

How does an educator go from having an idea for a new assessment, to having it implemented in their course? In a recently published paper in Studies in Higher Education, we used practice theory to help understand ‘bringing a new task into being’. We hope that using practice theory as a way to understand assessment might help us move beyond just measurement and learning, to understand the sayings and doings, contexts, relationships and materials of assessment:

Assessment as a field of investigation has been influenced by a limited number of perspectives. These have focused assessment research in particular ways that have emphasised measurement, or student learning or institutional policies. The aim of this paper is to view the phenomenon of assessment from a practice perspective drawing upon ideas from practice theory. Such a view places assessment practices as central. This perspective is illustrated using data from an empirical study of assessment decision-making and uses as an exemplar the identified practice of ‘bringing a new assessment task into being’. It is suggested that a practice perspective can position assessment as integral to curriculum practices and end separations of assessment from teaching and learning. It enables research on assessment to de-centre measurement and take account of the wider range of people, phenomena and things that constitute it.

Details on the article are below. The first 50 people to click the link get a free copy of the article; get in touch with me if you have any difficulties.

Reframing assessment research: through a practice perspective. Studies in Higher Education, 2016. Boud, D., Dawson, P., Bearman, M., Bennett, S., Joughin, G. & Molloy, E.

New project “Feedback for learning: closing the assessment loop”

I’m delighted to be part of a team on a new $280k Office for Learning and Teaching project titled “Feedback for learning: closing the assessment loop”. The project is led by A/Prof Michael Henderson from Monash University:

Feedback (during and after assessment tasks) is critical for effectively promoting student learning. Without feedback, students are limited in how they can make judgements about their progress, and how they can change their future performance. Feedback is the linchpin of students’ effective decision making, and the basis of improved learning outcomes. However, feedback is under-utilised and often misunderstood by both students and academics. This project is about improving student learning (and experience) through improving institutional, academic, and student capacity to stimulate and leverage assessment feedback.

The aim of this project is to improve student learning and experience by improving the way in which the Australian Higher Education sector enacts feedback. Our approach will deliver a pragmatic, empirically based framework of feedback designs to guide academics, academic developers and instructional designers, as well as institutional policy. This will be supported by large scale data highlighting patterns of success and 10 rich cases of feedback designs to demonstrate how that success can be achieved. In addition, this project will facilitate the likelihood of adoption through a series of dissemination activities including national workshops built on a participatory design approach.

More to come as it’s available.

4 ways technology shapes assessment designs

As part of the Assessment Design Decisions project, we spoke with 33 Australian university educators about how technology influences their assessment design processes. We recently published a paper in the British Journal of Educational Technology with our results. Our four key themes are:

  1. Technology is enmeshed in the ‘economics of assessment’
  2. Technology is seen as ‘contemporary and innovative’
  3. Technology aims to shape student behaviour – and technology is shaped by student behaviour
  4. Support and compromise were necessary for technology to really support assessment

Details on the article are below. Please get in touch if you want to discuss or if you need help getting a copy of the article.

How technology shapes assessment design: Findings from a study of university teachers. British Journal of Educational Technology, 2016. Bennett, S., Dawson, P., Bearman, M., Molloy, E. & Boud, D.

A wide range of technologies has been developed to enhance assessment, but adoption has been inconsistent. This is despite assessment being critical to student learning and certification. To understand why this is the case and how it can be addressed, we need to explore the perspectives of academics responsible for designing and implementing technology-supported assessment strategies. This paper reports on the experience of designing technology-supported assessment based on interviews with 33 Australian university teachers. The findings reveal the desire to achieve greater efficiencies and to be contemporary and innovative as key drivers of technology adoption for assessment. Participants sought to shape student behaviors through their designs and made adaptations in response to positive feedback and undesirable outcomes. Many designs required modification because of a lack of appropriate support, leading to compromise and, in some cases, abandonment. These findings highlight the challenges to effective technology-supported assessment design and demonstrate the difficulties university teachers face when attempting to negotiate mixed messages within institutions and the demands of design work. We use these findings to suggest opportunities to improve support by offering pedagogical guidance and technical help at critical stages of the design process and encouraging an iterative approach to design.

Support for assessment practice: developing the Assessment Design Decisions Framework

In 2012 I co-led a team with Margaret Bearman to investigate the question: “How do university teachers make decisions about assessment?” This led us to talk with academics from around the country about how they do their assessment design work – and what we can do to help. We ended up producing the Assessment Design Decisions suite of resources, with support from an Office for Learning and Teaching grant. We just had an important paper published from that project which shows the working behind those resources.

Support for assessment practice: developing the Assessment Design Decisions Framework. Teaching in Higher Education, 2016. Bearman, M., Dawson, P., Boud, D., Bennett, S., Hall, M. & Molloy, E.

There are many excellent publications outlining features of assessment and feedback design in higher education. However, university educators often find these ideas challenging to realise in practice, as much of the literature focuses on institutional change rather than supporting academics. This paper describes the conceptual development of a practical framework designed to stimulate educators’ thinking when creating or modifying assessments. We explain the concepts that underpin this practical support, including the notions of ‘assessment decisions’ and ‘assessment design phases’, as informed by relevant literature and empirical data. We also present the outcome of this work: the Assessment Design Decisions Framework. This provides key considerations in six categories: purposes, contexts, tasks, interactions, feedback processes and learning outcomes. By tracing the development of the Framework, we highlight complex ways of thinking about assessment that are relevant to those who design and deliver assessment to tertiary students.

Reflective practice on reflective practice

The most watched YouTube video on my channel is my 2012 video on Reflective Practice. Recently, Clive Buckley from Glyndwr University invited me to expand on that video for his MSc Learning and Technology students. Here is the result – a sort of reflective practice on reflective practice:

Here is the original 2012 video:

A fun piece of trivia: this video was recorded at home while my son was a few weeks old. My wife was wheeling him around the block in the pram and I only had one take to get it right!

Moving online: the future of universities in the online world

I spoke with Claire Nichols from ABC Radio National’s Summer Breakfast program about the future of lectures and exams.

It’s an anxious time for many school leavers as they wait to receive their university offers for this year.

When they do begin classes in the coming weeks, it’s likely to be a very different learning experience from that of a few years ago.

More classes will be delivered online, with lectures becoming less common, and even the dreaded end-of-semester exam could be on its way out.

Is online a better way to learn? And how are universities going about implementing this change to how subjects are taught?

Download the podcast here.

Will the University of Adelaide's lecture phase-out be a flop?

The University of Adelaide is planning to completely phase out lectures. In their place will be online materials and small group face-to-face sessions. According to University of Adelaide Vice-Chancellor Warren Bebbington, the lecture is dead – and it is not coming back.

Lectures have been around for hundreds of years. They have survived other technological revolutions, including the printing press and the motion picture. Adelaide will be the first university in Australia to break with tradition and eliminate them entirely. But is this change good for learning?

‘Flipped’ classrooms

The University of Adelaide’s move is part of a growing trend to “flip the classroom” by swapping what students do in class with what they do out of class. The flipped classroom is where lectures and other passive learning activities take place at home, while problems, questions and other activities that require socialising and interaction take place in the presence of the teacher.

This means students have to complete pre- and/or post-class activities to fully benefit from in-class work.

Research on the effectiveness of the flipped classroom approach is beginning to trickle out, but it’s not necessarily an evidence-based practice yet. However, if we examine the components of this approach, the outlook is positive.

If the lectures Adelaide is ditching are monologues without any interactivity, then video is probably going to be a good replacement. Decades of research suggest such lectures are not a great use of precious face-to-face time; some have even claimed lectures are as bad for learning as smoking is for health.

Since the 1920s, researchers have been conducting “media comparison studies” where the same teaching approach (for example, the lecture) is applied to two or more media (one is usually face-to-face).

These studies began with emerging approaches like correspondence courses and radio, and later progressed onto video teaching. When we pool together these studies we find, on average, that there is no significant difference in learning between different media – assuming we teach in the same way. So learning won’t be much better, or much worse, from a face-to-face or video lecture.

While there may be no significant difference in learning, online lectures put students in control. There is evidence that students fast-forward through parts they already understand, and re-watch parts they struggle with. Researchers call this “learner pacing” and it has been found to help students manage the cognitive demands of their studies. Learner pacing can even mitigate the effects of some bad teaching approaches.

Ban lectures or just change them?

If the choice is between being talked at non-stop for an hour face-to-face or by video, then please give me the video. The problem is, this is rarely the choice.

Delving deeper into the damning evidence on lectures, it turns out that only classes that were more than 90% passive listening were “as bad as smoking”. Walk into a modern lecture and you’ll be unlikely to find a 60-minute monologue. It’s more a caricature than a common practice. Bebbington claimed the lecture is dead, but really it just evolved.

The anti-lecture evidence actually just supports good lecturing practice: require students to spend at least 10% of the lecture discussing or problem solving.

If Adelaide’s lectures are long speeches, put them online. Or even better, divide them into smaller chunks first, as lecture video length strongly influences attention. But if students are already required to be active in lectures, then it’s a more subjective decision.

Another challenge of Adelaide’s new model is that class time becomes more dependent on students completing their pre-class tasks (for example, watching the video). When students aren’t prepared for their small group learning session, it turns a flipped classroom into a “flop”, because the teacher needs to bring some of the students up to speed.

The good news is that flipped classroom approaches like Adelaide’s may help students develop a sense of autonomy, feel competent, and get connected with other students. Developing these attributes should lead to improved motivation – and hopefully result in students preparing for class. However, this connection between flipped classrooms and motivation remains at best a theoretically informed hunch.

So, is the move to phase out lectures supported by the evidence? I’m always wary about blanket bans on any particular approach to teaching or assessment. It really comes down to the individual lecture, and whether Bebbington’s classrooms flip or flop.

The Conversation

Phillip Dawson is Associate Professor and Associate Director, Centre for Research in Assessment and Digital Learning at Deakin University.

This article was originally published on The Conversation.
Read the original article.

Policing won’t be enough to prevent pay-for plagiarism

Buying and selling high-stakes assessments is bad for education. It undermines community confidence because we can’t be sure if a grade was earned or bought. Plagiarism hurts plagiarists too, because they miss out on the learning opportunities that the assessment was supposed to provide. Tensions around plagiarism may be part of a culture of distrust between teachers and students.

Recently, it was revealed that high school students in NSW are buying essays made-to-order online for little more than A$100. University assignments can be more expensive, costing up to $1000 from the controversial (and now-defunct) MyMaster website.

With the recent media attention, we could be fooled into thinking pay-for plagiarism is a modern, high-tech invention. However, the internet merely supports the logistics. Pay-for plagiarism is much older than computers – many of your favourite books were “ghostwritten”.

The difficulties in policing

The problem is that pay-for plagiarism is very difficult to police. Unlike “copy-paste” plagiarism or using an assignment that a previous student submitted, each pay-for assignment is made-to-order. We can’t just compare student work against a database of sources because each assignment is a bespoke creation.

Identifying exactly who wrote a particular piece of text is a hard problem. Disputes about authorship date back to biblical times – even the Bible itself has books with disputed authorship. New technology may help discern if a student wrote a particular piece, but it is far from perfect, and far from application in a mass education context.

As anti-plagiarism enforcement gets smarter, so do the plagiarists. While we may be able to spot a ghostwritten university-level essay submitted by a struggling high school student, this is a rookie pay-for plagiarism mistake. Smart plagiarists rework the essays they pay for, or even employ techniques like “back-translation” by running plagiarised text through tools like Google Translate.

Some high-end services will even produce a tailored assignment just for you, based on analysis of your previous writing style. Techniques like these make it difficult to detect plagiarised work.

The possible way forward

Policing pay-for plagiarism may work to some extent, but it won’t completely solve the problem. So, what are our alternatives? How can we complement an enforcement approach?

NSW Teachers Federation president Maurie Mulheron favours requiring students to complete all assessments in class. Students can’t pay for someone else to do their work for them if the teacher is watching.

However, this approach creates further problems. The classroom environment is not an “authentic” environment for some of the tasks teachers set students. Consider an in-class essay versus a take-home essay assignment. Even in disciplines like history where an essay might be a true representation of what professional practitioners do, a stressful classroom and time limit can lead to students producing different work.

Mulheron’s approach would tell us much about what students are capable of within a classroom environment, but surely we want to know what they can do in the real world too.

Clever assessment design may be another part of the solution. Assessment that builds on the student’s own experiences, classwork, prior drafts and feedback is more challenging to ghostwrite. We can also build sequences of tasks that have a small mandatory supervised component. This is commonly implemented at universities as an exam that needs to be passed to pass a unit.

Above all else, we should examine the root causes of pay-for plagiarism. One study into the reasons higher education students plagiarise – the study was not restricted to pay-for plagiarism – found a variety of factors that we can learn from. One of these factors was pressure: time pressure, stress, pressure from family, and pressure from society.

This may be a factor for students paying for HSC assignments as well. For example, students at one school were apparently told they would be kicked out if their work was not good enough. Perceptions that poor performance will be punished, rather than addressed with support, may make pay-for plagiarism an attractive option.

Other issues in the study included teaching and learning issues (ranging from workload to bad teaching), laziness or convenience, and – my favourite – “pride in plagiarising”. Better detection of ghostwriting will not completely address these issues.

Solving the pay-for plagiarism problem requires us to understand why paying $1000 seems like a better choice than completing a particular assignment. Cheating students are definitely in the wrong, but when placed in a high-stakes, high-stress environment, they may feel like they have few other options. We need to change this.

The Conversation

Phillip Dawson is Associate Professor and Associate Director, Centre for Research in Assessment and Digital Learning at Deakin University.

This article was originally published on The Conversation.
Read the original article.