This is Part 4 of a seven-part series examining Findings From Plagiarism Across Europe and Beyond 2017.
Mixed views continue to exist across the sector as to whether we should be trying to detect contract cheating. I have always felt that academics have a duty to protect the value of assessments for other students, but I can see why views vary here. The risk of being caught should also act as a deterrent. I was therefore interested to hear the views of student advocates at the conference, as presented by Wendy Sutherland-Smith. Advocates felt that students wanted to see people who were contract cheating get caught.
This blog post focuses mainly on technology. Before anyone shouts at me, I should establish that we’re not yet in a position where we can assume that technology can detect contract cheating. However, we may be able to use technology to identify work that may be unoriginal, with the onus then being on a human to make a judgement. This is analogous to how software that identifies similarity between texts can be used as part of a plagiarism detection process.
Two conference presentations focused on software and techniques designed to identify the author of different documents. This is something of a trend right now, as I know of two further groups in the UK actively working on this (and I’m sure that there are other people working on this who I’m not aware of). Further, at the conference I found out about a current plagiarism detection tool that already has this built in. I also know of one of the largest software providers in the plagiarism field that is actively working to add writing style analysis to its software.
From the conference presentations, two quite different approaches were proposed.
Patrick Juola runs a company that aims to automate authorship attribution and proposed an approach that sounds simple on paper. He suggested collecting assessment submissions from students throughout their course. Their most recent document is then compared with the one submitted immediately prior using an automated process to see if both documents have been written by the same person. If not, there is cause for concern.
This approach certainly sounds like it could have some merit, but it would need to be supplemented with details of exactly which measures are being compared.
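The presentation did not specify which measures Juola's system actually compares, but a common baseline in authorship attribution is to compare the relative frequencies of function words, since these tend to be topic-independent. A minimal sketch of that idea, using an illustrative hand-picked word list and cosine similarity (both my assumptions, not Juola's method):

```python
from collections import Counter
import math

# A small illustrative set of English function words. Real stylometric
# systems use much larger, carefully chosen feature sets; this list is
# not taken from Juola's work.
FUNCTION_WORDS = [
    "the", "of", "and", "a", "in", "to", "is", "that", "it",
    "for", "on", "with", "as", "was", "but", "be", "by",
]

def style_vector(text):
    """Relative frequency of each function word in the text."""
    words = text.lower().split()
    counts = Counter(words)
    total = max(len(words), 1)
    return [counts[w] / total for w in FUNCTION_WORDS]

def style_similarity(doc_a, doc_b):
    """Cosine similarity between the two documents' style vectors."""
    va, vb = style_vector(doc_a), style_vector(doc_b)
    dot = sum(x * y for x, y in zip(va, vb))
    norm_a = math.sqrt(sum(x * x for x in va))
    norm_b = math.sqrt(sum(x * x for x in vb))
    if norm_a == 0 or norm_b == 0:
        return 0.0  # no function words found; no basis for comparison
    return dot / (norm_a * norm_b)
```

In Juola's proposed workflow, each new submission would be compared against the student's previous one; a low similarity score would flag the pair for human review. Any threshold for "low" would need careful calibration, which is exactly why the underlying measures matter.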
Rui Sousa-Silva looked at how authorship attribution software could be used by people who encounter a document that they think may have been written by someone other than the student submitting it. He gave several examples using Wordsmith Tools. Here, an investigator would compare the suspect document with other work written by the student. This way of thinking about authorship did provide more detail, but I still feel that considerably more training would be needed before many staff felt comfortable relying on this type of software, and before everyone involved in academic conduct investigations understood it.
I made limited progress on the use of stylometry for both plagiarism and contract cheating detection in the early to mid-2000s, mostly working with students. Although a few results found their way into talks and papers, I was never able to devote sufficient time to this. So, it’s good to see other people taking up the mantle. There are still issues to overcome with ensuring the reliability of these stylometric approaches, as well as ensuring that student assignments are widely and systematically collected.
Other Approaches To Detection
Tracey Bretag presented results from a survey of academic staff from Australian universities based on how they had detected contract cheating. The most common responses given were detection by knowledge of the student’s academic ability (71%) and knowledge of their English language ability (62%). Both of these approaches are valid but can be difficult when anonymous marking is used.
Tracey also found that 49% of academics were alerted to contract cheating through text matching from similarity detection software, such as Turnitin. This is an interesting result from several perspectives. First, it supports some of my previous research looking at the use of Turnitin to detect contract cheating. Second, it casts doubt on the claim of many essay providers that they are providing a plagiarism-free assignment. This suggests that, even as new approaches are introduced, software designed to detect plagiarism remains essential.
In research with student advocates presented by Wendy Sutherland-Smith, she also found suggestions for the use of software, particularly to identify students. One suggestion was keystroke analysis, a technique with some overlap with writing style analysis. I also noticed a suggestion of eye detection. It is not clear whether this means tracking a student's eyeline to ensure that it is focused on assessment tasks, or iris scanning to verify that the correct student is sitting examinations and submitting work.
Perhaps all of these different approaches for detecting contract cheating have some merit?