Reducing Contract Cheating – Challenges And Solutions – Presentation Slides

How can we deter and detect contract cheating at all levels within an institution? That was the topic Irene Glendinning and I discussed in a webinar in the run-up to the 4th International Day of Action Against Contract Cheating.

Here are the slides we used.


You can also access these slides and those from many of my other presentations on my SlideShare account.

We also looked at the process of detecting and reporting contract cheating. What steps need to be taken to ensure that this process is fair and consistent? Such considerations are important as we try to preserve academic integrity throughout the process.

You can also watch the video recording of the webinar here.

Contract Cheating and Essay Mills 2017 Findings Part 4 – Detecting Contract Cheating

This is Part 4 of the 7-part series examining Findings From Plagiarism Across Europe and Beyond 2017.

Mixed views continue to exist across the sector as to whether we should be trying to detect contract cheating. I have always felt that academics have a duty to do so, to protect the value of assessments for other students, but I can see why views vary here. The risk of being caught should also act as a deterrent. But I was interested to hear the views of student advocates at the conference, as presented by Wendy Sutherland-Smith. Advocates felt that students wanted to see people who were contract cheating get caught.

This blog post focuses mainly on technology. Before anyone shouts at me, I should establish that we’re not yet in a position where we can assume that technology can detect contract cheating. However, we may be able to use technology to identify work that may be unoriginal, with the onus then being on a human to make a judgement. This is analogous to how software that identifies similarity between texts can be used as part of a plagiarism detection process.

Stylometric Analysis

Two conference presentations focused on software and techniques designed to identify the author of different documents. This is something of a trend right now, as I know of two further groups in the UK actively working on this (and I’m sure there are other people working on it who I’m not aware of). Further, at the conference I found out about a current plagiarism detection tool that already has this built in. I also know that one of the largest software providers in the plagiarism field is actively working to add writing style analysis to its software.

From the conference presentations, two quite different approaches were proposed.

Patrick Juola runs a company that aims to automate authorship attribution and proposed an approach that sounds simple on paper. He suggested collecting assessment submissions from students throughout their course. Their most recent document is then compared with the one submitted immediately prior using an automated process to see if both documents have been written by the same person. If not, there is cause for concern.

This approach certainly sounds as though it could have some merit, but it does need to be supplemented by details of exactly which measures are being compared.
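As a rough illustration of the kind of comparison being described, here is a minimal sketch in Python. It profiles two documents by the relative frequencies of common English function words and reports a cosine similarity. The word list, the choice of features and the 0.75 threshold are all illustrative assumptions on my part, not the measures Juola's system actually uses.

    # Minimal sketch: compare two submissions by the same student using
    # function word frequency profiles. All thresholds are illustrative.
    import math
    import re
    from collections import Counter

    FUNCTION_WORDS = [
        "the", "of", "and", "to", "a", "in", "that", "is", "it", "for",
        "with", "as", "was", "on", "but", "be", "this", "which", "by", "not",
    ]

    def profile(text):
        """Relative frequency of each function word in the text."""
        tokens = re.findall(r"[a-z']+", text.lower())
        counts = Counter(tokens)
        total = max(len(tokens), 1)
        return [counts[w] / total for w in FUNCTION_WORDS]

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return dot / norm if norm else 0.0

    def flag_for_review(previous_submission, latest_submission, threshold=0.75):
        """True if the two submissions look stylistically dissimilar."""
        similarity = cosine(profile(previous_submission), profile(latest_submission))
        return similarity < threshold

    # Example usage (hypothetical file names):
    # earlier = open("student123_assignment2.txt").read()
    # latest = open("student123_assignment3.txt").read()
    # if flag_for_review(earlier, latest):
    #     print("Stylistic mismatch - refer for human review")

Even in this toy form, the output is only ever a prompt for a human to look more closely, never a finding in itself.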

Rui Sousa-Silva looked at how authorship attribution software could be used by people who suspected that a document had been written by someone other than the student submitting the work. He gave several examples using Wordsmith Tools. Here, an investigator would compare the suspect document with other work written by the student. This way of thinking about authorship did provide more detail, but I still feel that a lot more training would be needed before many staff felt comfortable relying on this type of software, and that everyone involved with academic conduct investigations would need to understand it.
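To give a flavour of this investigator-led comparison, here is a hedged sketch using a simplified Delta-style measure: the suspect document is scored by how far its common-word frequencies deviate from a set of documents known to be by the student. This is not the Wordsmith Tools workflow, just an illustration of the underlying idea; the word count and any threshold applied to the score would be arbitrary assumptions.

    # Sketch: score a suspect document against a student's known writing
    # using z-score deviations over their most frequent words.
    import re
    import statistics
    from collections import Counter

    def freqs(text):
        tokens = re.findall(r"[a-z']+", text.lower())
        counts = Counter(tokens)
        total = max(len(tokens), 1)
        return {w: c / total for w, c in counts.items()}

    def delta_score(suspect, known_docs, top_n=50):
        """Mean absolute z-score deviation of the suspect document
        from the student's known documents, over their top words."""
        known = [freqs(d) for d in known_docs]
        overall = Counter()
        for f in known:
            overall.update(f)
        vocab = [w for w, _ in overall.most_common(top_n)]
        suspect_f = freqs(suspect)
        deltas = []
        for w in vocab:
            values = [f.get(w, 0.0) for f in known]
            mean = statistics.mean(values)
            stdev = statistics.pstdev(values) or 1e-9
            deltas.append(abs((suspect_f.get(w, 0.0) - mean) / stdev))
        return statistics.mean(deltas)

    # Example (hypothetical files): a high score suggests the suspect text
    # deviates from the student's usual style and merits a closer human look.
    # known = [open(p).read() for p in ["essay1.txt", "essay2.txt", "essay3.txt"]]
    # print(delta_score(open("suspect.txt").read(), known))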

I made limited progress on the use of stylometry for both plagiarism and contract cheating detection in the early to mid-2000s, mostly working with students. Although a few results found their way into talks and papers, I was never able to devote sufficient time to this. So, it’s good to see other people taking up the mantle. There are still issues to overcome with ensuring the reliability of these stylometric approaches, as well as ensuring that student assignments are widely and systematically collected.

Other Approaches To Detection

Tracey Bretag presented results from a survey of academic staff from Australian universities based on how they had detected contract cheating. The most common responses given were detection by knowledge of the student’s academic ability (71%) and knowledge of their English language ability (62%). Both of these approaches are valid but can be difficult when anonymous marking is used.
Tracey also found that 49% of academics were alerted to contract cheating through text matching from similarity detection software, such as Turnitin. This is an interesting result from several perspectives. First, it supports some of my previous research looking at the use of Turnitin to detect contract cheating. Second, it casts doubt on the claim of many essay providers that they are providing a plagiarism free assignment. This does suggest that, even as new approaches are introduced, the use of software designed to detect plagiarism is still essential.

In the research with student advocates presented by Wendy Sutherland-Smith, she also found suggestions for the use of software, particularly to identify students. One suggestion was keystroke analysis, a technique with some overlap with writing style analysis. I also noticed a suggestion of eye detection. Whether this involves tracking a student’s eyeline to ensure that it is focused on assessment tasks, or iris scanning to ensure that the correct student is sitting examinations and submitting work, is not clear.

Perhaps all of these different approaches for detecting contract cheating have some merit?

Developing New Approaches For The Automated Detection Of Contract Cheating

Some of the recent published work by Ann Rogerson is of interest to those of us involved with contract cheating research. Ann has been looking at the indicators within texts that might show that work has not been written by the student who handed it in.

The work was presented at the 6th International Integrity and Plagiarism Conference. Unfortunately, I wasn’t able to attend and the full paper is not yet available online, but the abstract can be accessed here. There is also an easily accessible version of the findings courtesy of this Times Higher Education article on tips for detecting and beating cheating.

Indicators From The Cheating And Plagiarism Processes

During the contract cheating and plagiarism processes, there are often indicators that find their way into student work. For plagiarism from the web, this is the simplest case, since the indicators relate to the site from which the work has been sourced. This is true whether the document is compiled in a patchwork manner from multiple sites or lifted directly from a site containing ready-written essays.

For contract cheating, the process of finding indicators can be more difficult, but clues are left behind. Searching for these clues may well be a method that can be automated, or at least computer supported. However, there is still an underlying algorithmic problem of how to identify these clues and how to minimise the number of false positive results that are obtained.

Some Indicators For Automated Investigation

One suggestion from the Times Higher article is to look for inconsistencies within the text. For instance, this could be when the same document contains both perfect English and sections riddled with errors. Some of the ongoing work on using stylometrics to detect contract cheating, which looks for changes in writing style within a single document, could be useful here. Monitoring the quality of English would be a metric that could be tracked across different sections of a longer document, allowing outliers to be identified.
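A minimal sketch of what tracking a metric section by section might look like is given below. The two metrics used (mean sentence length and vocabulary richness) and the z-score cut-off of 2 are illustrative assumptions; a real system would need validated stylometric features and far more careful handling of short sections.

    # Sketch: flag sections of a document whose simple style metrics
    # sit far from the document's own norm.
    import re
    import statistics

    def section_metrics(section):
        sentences = [s for s in re.split(r"[.!?]+", section) if s.strip()]
        words = re.findall(r"[A-Za-z']+", section)
        mean_sentence_len = len(words) / max(len(sentences), 1)
        vocab_richness = len(set(w.lower() for w in words)) / max(len(words), 1)
        return mean_sentence_len, vocab_richness

    def outlier_sections(sections, cutoff=2.0):
        """Return indices of sections that are outliers on either metric."""
        flagged = set()
        metrics = [section_metrics(s) for s in sections]
        for dim in range(2):
            values = [m[dim] for m in metrics]
            if len(values) < 3:
                continue
            mean = statistics.mean(values)
            stdev = statistics.pstdev(values)
            if stdev == 0:
                continue
            for i, v in enumerate(values):
                if abs(v - mean) / stdev > cutoff:
                    flagged.add(i)
        return sorted(flagged)

    # Example: split an essay on blank lines and flag sections that stand out.
    # essay_sections = open("essay.txt").read().split("\n\n")
    # print(outlier_sections(essay_sections))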

Another possible indicator listed is that of “blandness”: tracking students who write in very generic terms and do not show any real insight. To me, that is mainly an issue of assessment design, as many of the best assignment briefs and marking schemes insist upon the additional level of insight that makes taking work from a third-party source difficult. How easy this would be to detect using an automated system is questionable. It may, for instance, be possible to develop a system that looks for indicators that a student has local knowledge. The corresponding absence of such indicators could then provide a measure of blandness.
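One speculative way to approximate this would be to measure how much of the module-specific vocabulary (case studies, local organisations, examples used in lectures) actually appears in a submission. The keyword list and the threshold in the sketch below are purely hypothetical.

    # Sketch: a crude "blandness" measure based on missing module-specific terms.
    def blandness_score(text, module_keywords):
        """Fraction of module-specific terms that never appear in the text."""
        lowered = text.lower()
        missing = [kw for kw in module_keywords if kw.lower() not in lowered]
        return len(missing) / max(len(module_keywords), 1)

    # Example with hypothetical module-specific terms chosen by the assessor.
    # keywords = ["local case study plc", "week 7 seminar dataset", "campus survey"]
    # if blandness_score(open("essay.txt").read(), keywords) > 0.8:
    #     print("Very generic submission - possible indicator, review manually")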

The final suggestion from Ann Rogerson is to check the correctness of references. Examples of made-up references are given, although any intelligent computerised system needs to distinguish between students who have created fictitious references and those who simply have poor referencing skills. An additional complication arises when students include legitimate references that do not actually support the information provided in the submitted text. Measures of the quality of referencing, and of the types of sources referenced, are already available, and these techniques could be developed for use in automated systems. Alternatively, search mechanisms could be used to find common types of references that are not appropriate.
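Parts of this reference checking could plausibly be automated. The sketch below pulls DOIs and publication years out of a reference list and flags entries that look suspect. The regular expressions and the reliance on doi.org are my assumptions about how references are formatted, not a description of Ann Rogerson's method, and a resolving DOI only shows that the reference exists, not that it supports the claims made in the text.

    # Sketch: flag reference entries with implausible years or unresolvable DOIs.
    import datetime
    import re
    import requests

    DOI_PATTERN = re.compile(r"\b10\.\d{4,9}/\S+")
    YEAR_PATTERN = re.compile(r"\((\d{4})\)")

    def doi_resolves(doi):
        """True if doi.org can resolve this DOI."""
        resp = requests.head(f"https://doi.org/{doi}", allow_redirects=True, timeout=10)
        return resp.status_code < 400

    def check_reference(entry):
        """Return a list of issues spotted in a single reference entry."""
        issues = []
        this_year = datetime.date.today().year
        for year in YEAR_PATTERN.findall(entry):
            if not 1900 <= int(year) <= this_year + 1:
                issues.append(f"implausible year {year}")
        for doi in DOI_PATTERN.findall(entry):
            if not doi_resolves(doi.rstrip(".")):
                issues.append(f"DOI does not resolve: {doi}")
        return issues

    # Example (hypothetical file):
    # for line in open("reference_list.txt"):
    #     problems = check_reference(line)
    #     if problems:
    #         print(line.strip(), problems)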

Further Considerations For Automated Contract Cheating Detection

There is an additional process consideration needed before the information in the article can easily be applied: there has to be a mechanism through which students can be called to verbally account for their work, supported by university cheating and plagiarism regulations. Many current regulations require a source document to be identified before a cheating case is considered. Just “a feeling” that the work was not written by the student would not be sufficient for a case or an interview to proceed.

Overall, automated detection of contract cheating is an area of research that is struggling, both to attract appropriate levels of interest from researchers and to find techniques that work. Adding new approaches like the ones suggested in this post, where several different indicators need to be found, may be the breakthrough that this field of research needs.

An Initial Analysis Of The Contextual Information Available Within Auction Posts On Contract Cheating Agency Websites Video Presentation

Here is a video presentation of a recent paper I co-authored with Robert Clarke. The presentation muses on the opportunities for detecting contract cheating afforded by considering the wider context in which requests are made on agency sites.

The video is also available on the YouTube account for Thomas Lancaster.

Automated methods of detecting contract cheating are one of the areas in which there is currently a real need. Whilst people like Robert Clarke provide a human detection process, this can never be 100% successful. For instance, there is no way that a human can continually monitor every post on an agency site such as Freelancer.

Whilst an intelligent system would not be a sole solution to this problem, since its output would still require human checking, it would increase the consistency with which posts are monitored and would allow more information from such sites to be captured.
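A minimal sketch of this kind of computer-supported monitoring is shown below: periodically fetch a listings page from an outsourcing site and flag new posts containing assignment-like phrases. The URL, the phrase list and the crude HTML stripping are all placeholders (no real agency site API is assumed); a deployed system would need site-specific scraping or API access, and human review of every hit.

    # Sketch: flag assignment-like posts on an outsourcing site listings page.
    import re
    import requests

    SUSPICIOUS_PHRASES = [
        "write my essay", "assignment due", "2000 word report",
        "plagiarism free", "university coursework", "dissertation chapter",
    ]

    def fetch_listing_text(url):
        """Download a listings page and strip the HTML tags to plain text."""
        resp = requests.get(url, timeout=15)
        resp.raise_for_status()
        return re.sub(r"<[^>]+>", " ", resp.text)

    def suspicious_posts(text):
        """Return the suspicious phrases found in the page text."""
        lowered = text.lower()
        return [p for p in SUSPICIOUS_PHRASES if p in lowered]

    # Example with a placeholder URL - not a real endpoint.
    # hits = suspicious_posts(fetch_listing_text("https://example.com/freelance/listings"))
    # if hits:
    #     print("Possible contract cheating requests found:", hits)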

The Prevention And Detection Of Contract Cheating

This is the second of two talks I delivered on contract cheating at a Birmingham City University workshop funded by the Higher Education Academy.

The focus of this talk was on three areas for people considering what to do about contract cheating: prevention, detection and policy. Several of the slides are prompt-led, and these generated a lot of discussion.

The slides, also available on the SlideShare account for Thomas Lancaster, are provided here.

Some universities still struggle to keep their academic integrity policies up to date, or only review them every few years. Such an approach is dangerous in a world where technology can rapidly change the cheating landscape.

There is also the policy question of where contract cheating begins. Does it start when a student submits work that they have outsourced, or is the mere request to outsource work the starting point? Personally, I favour the latter, but many policies require the student to have completed the process and submitted bespoke work created by another person, which can be challenging to prove.