As one of my previous posts indicated, it is rare for assessment and plagiarism to be considered as equal topics within educational research.
The book chapter, Assessment and Plagiarism, by Thomas Lancaster (me), Anthony Robins and Sally Fincher addresses that issue for the computing discipline. It is part of The Cambridge Handbook of Computing Education Research, a book that “describes the extent and shape of computing education research today”.
As well as discussing the importance of assessment and taking steps to minimise plagiarism, the chapter focuses specifically on techniques that are most suitable for computing. The chapter also provides recommendations for future research in the field.
In this post, I’ve picked out five ideas for research opportunities from the chapter that have implications for multiple disciplines (beyond Computing). Of course, you should still read the full chapter for more ideas and a lot of background that will help any future research plans (and make the literature review sections of papers much easier to complete).
Collated and Reusable Assessments
In previous years, there have been pushes across the sector to build up collections of reusable learning components, including assessment banks intended for wider use. How well are those projects working? What measures are taken to keep the assessment banks up to date? Do students and educators see value in these activities continuing? And how can plagiarism and contract cheating be avoided with these standard assessments?
Detecting Automatically Spun Essays
This isn’t a new topic for the blog (see these posts), but it is still one that hasn’t been widely investigated. When a student automatically converts one version of an essay into another, perhaps through back translation, how can this plagiarism be detected? Are there indicators that academics should be looking out for when they are marking? Or are there indicators that a machine could identify? Failing that, could multiple versions of an assignment be generated in multiple languages for use with text matching software?
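To see why standard text matching struggles with spun text, consider a toy sketch using Python’s standard-library difflib. The sentences below (and the supposed back-translated wording) are invented purely for illustration; real spun essays are longer and messier, but the same effect applies:

```python
from difflib import SequenceMatcher

# An invented source sentence and an invented "back-translated" version of it,
# where the wording has shifted but the meaning is preserved.
original = "The quick brown fox jumps over the lazy dog."
spun = "A fast brown fox leaps over the idle dog."

# An identical copy matches perfectly...
identical_ratio = SequenceMatcher(None, original, original).ratio()

# ...but the spun version no longer produces a full match, which is why
# naive character-level text matching can miss this kind of plagiarism.
spun_ratio = SequenceMatcher(None, original, spun).ratio()
```

The partial similarity score is exactly the kind of weak, ambiguous indicator a marker or a machine would have to interpret, which is what makes this an open research question rather than a solved detection problem.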
Academic Integrity Processes
Academic integrity processes are thought to still vary greatly across the sector. Is that the case? More specifically, what about at discipline level? Are processes applied consistently, and are penalties (when necessary) given out in a fair manner? What recommendations exist for best practice at a discipline level?
Gamification of Assessment
Gamification techniques are now widely used across many walks of life, everything from encouraging continued play of computer games to nudging people to shop in particular ways. How far will these techniques work with assessment? Are there methods that will make assessment more engaging and encourage students to develop their understanding to a more in-depth level than they otherwise would have done?
Automated Marking
Many methods have been developed to reduce the burden of assessment on educators, including automated techniques with varying levels of success. At one end of the scale, there are systems that will automatically mark essays, although this is usually done through metric-based analysis of writing style and keyword analysis of content. There are also many systems for marking simple exam questions, such as multiple choice and short answer questions. Can these systems be developed further? Can better feedback be generated? There are also many ethical questions worthy of investigation, such as: is it fair on students to have their work marked in this way?
Feel free to share your own ideas for good topics for future assessment and plagiarism research in the comments section.