Developing New Approaches For The Automated Detection Of Contract Cheating

Some of the recently published work by Ann Rogerson will be of interest to those of us involved with contract cheating research. Ann has been looking at indicators within texts that might show that work was not written by the student who handed it in.

The work was presented at the 6th International Integrity and Plagiarism Conference. Unfortunately, I wasn’t able to attend and the full paper is not yet available online, but the abstract can be accessed here. There is also an easily accessible version of the findings courtesy of this Times Higher Education article on tips for detecting and beating cheating.

Indicators From The Cheating And Plagiarism Processes

During the contract cheating and plagiarism processes, there are often indicators that find their way into student work. Plagiarism from the web is the simplest case, since the indicators relate to the site from which the work has been sourced. This is the same whether the document is compiled in a patchwork manner from multiple sites or lifted directly from a site containing ready-written essays.

For contract cheating, the process of finding indicators can be more difficult, but clues are left behind. Searching for these clues may well be a method which can be automated, or at least computer supported. However, there is still an underlying computer algorithm problem about how to identify these clues and how to minimise the number of false positive results that are obtained.
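As an illustration of that trade-off, a minimal Python sketch could require agreement between several independent indicators before a case is raised. This combination rule is my own assumption about how indicators might be used together, not a published algorithm:

```python
def combined_suspicion(indicator_flags: dict, min_indicators: int = 2) -> bool:
    """Raise a case only when at least `min_indicators` independent
    indicators have fired. Requiring agreement between indicators is
    one crude way to trade a little recall for fewer false positives."""
    return sum(1 for fired in indicator_flags.values() if fired) >= min_indicators

# Hypothetical indicator results for one submission.
flags = {"style_shift": True, "blandness": True, "suspect_references": False}
print(combined_suspicion(flags))  # two indicators agree, so a case is raised
```

The appropriate value of `min_indicators` would need to be calibrated against real marked cases; the point is only that single weak signals should not trigger an accusation on their own.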

Some Indicators For Automated Investigation

One suggestion from the Times Higher Education article is to look for inconsistencies within the text, for instance when the same document contains both perfect English and sections riddled with errors. Some of the ongoing work into using stylometrics to detect contract cheating, which looks for changes in writing style within a single document, could be useful here. The quality of English is one metric which could be tracked across different sections of a longer document, allowing outliers to be flagged.
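As a rough sketch of how such within-document outlier detection might work, the fragment below computes two simple style metrics per section and flags sections that sit far from the document's own average. The choice of metrics, the z-score approach, and the threshold are all my own illustrative assumptions, not features taken from Rogerson's paper or from any existing stylometric tool:

```python
import re
import statistics

def section_metrics(section: str) -> dict:
    """Compute two simple per-section style metrics (illustrative only)."""
    words = re.findall(r"[A-Za-z']+", section.lower())
    sentences = [s for s in re.split(r"[.!?]+", section) if s.strip()]
    return {
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        "type_token_ratio": len(set(words)) / max(len(words), 1),
    }

def flag_outlier_sections(sections, threshold=1.5):
    """Flag sections whose metrics sit more than `threshold` standard
    deviations from the document's own mean -- a crude proxy for a
    change of author mid-document."""
    metrics = [section_metrics(s) for s in sections]
    flagged = set()
    for key in ("avg_sentence_len", "type_token_ratio"):
        values = [m[key] for m in metrics]
        mean = statistics.mean(values)
        stdev = statistics.pstdev(values)
        if stdev == 0:
            continue  # all sections identical on this metric
        for i, value in enumerate(values):
            if abs(value - mean) / stdev > threshold:
                flagged.add(i)
    return sorted(flagged)
```

A real system would need far richer features (function-word frequencies, error rates, punctuation habits) and careful calibration, since sections of one essay legitimately vary in style.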

Another possible indicator listed is that of “blandness”, tracking students who write in very generic terms and do not show any real insight. To me, that is mainly an issue of assessment design, as many of the best assignment briefs and marking schemes do insist upon that additional level of insight which makes taking work from a third party source difficult. How easy this would be to detect using an automated system is questionable. It may, for instance, be possible to develop a system to look for indicators that a student has local knowledge. The corresponding absence of such indicators could then provide a measure of blandness.
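To make the idea of measuring blandness concrete, here is a minimal sketch. It treats numbers, dates, and mid-sentence capitalised names as crude proxies for specific or local knowledge, and a hand-picked list of stock phrases as a proxy for generic writing. Both lists and the scoring formula are invented purely for illustration; a usable system would need validated features:

```python
import re

# Hypothetical list of stock "filler" phrases; a real list would be
# built from a corpus of known generic writing.
GENERIC_PHRASES = [
    "in today's society", "since the dawn of time", "in many ways",
    "plays an important role", "in the modern world",
]

def specificity_score(text: str) -> float:
    """Rough specificity measure: concrete markers (numbers and
    mid-sentence capitalised names) per 100 words, minus a penalty for
    stock generic phrases. A low or negative score suggests blandness."""
    words = text.split()
    if not words:
        return 0.0
    numbers = len(re.findall(r"\b\d[\d.,%]*", text))
    proper = 0
    for sent in re.split(r"[.!?]+\s*", text):
        tokens = sent.split()
        # Skip the first token: it is capitalised only because it
        # starts a sentence.
        proper += sum(1 for t in tokens[1:] if t[:1].isupper())
    lowered = text.lower()
    generic_hits = sum(lowered.count(p) for p in GENERIC_PHRASES)
    return (numbers + proper - 2 * generic_hits) * 100.0 / len(words)
```

For example, `specificity_score("In today's society, education plays an important role in many ways.")` comes out negative, while a sentence naming people, places, and dates scores well above zero.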

The final suggestion from Ann Rogerson is to check the correctness of references. Examples of made-up references are given, although any intelligent computerised system needs to distinguish students who have created fictitious references from those who simply have poor referencing skills. An additional complication here exists when students include legitimate references, but these do not actually support the information provided within the submitted text. Measures of the quality of referencing, and of the types of sources referenced, are already available. These techniques could be developed for use with automated systems. Alternatively, search mechanisms could be used to find common types of references that are not appropriate.
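A first automated pass over references might apply shallow plausibility checks of the kind sketched below. These heuristics and patterns are my own assumptions, not Rogerson's method, and a flagged entry is only a prompt for human review, since it may simply reflect poor referencing skills:

```python
import datetime
import re

def reference_warnings(reference: str) -> list:
    """Return warning strings for a single reference entry.
    These are shallow plausibility checks, not proof of fabrication."""
    warnings = []
    years = [int(y) for y in re.findall(r"\b(1[5-9]\d\d|20\d\d)\b", reference)]
    if not years:
        warnings.append("no publication year found")
    elif max(years) > datetime.date.today().year:
        warnings.append("publication year is in the future")
    # Crude check for a "Surname, I." author pattern.
    if not re.search(r"[A-Z][a-z]+,? [A-Z]\.", reference):
        warnings.append("no recognisable author pattern found")
    if len(reference.split()) < 5:
        warnings.append("entry too short to be a full reference")
    return warnings
```

A well-formed entry such as `"Doe, J. (2014). An example of a plausibly formatted reference entry."` produces no warnings, while `"Smith, J. (2099). Imaginary advances in widget studies."` is flagged for its future year. Checking whether a cited source actually supports the claim it is attached to is a much harder problem.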

Further Considerations For Automated Contract Cheating Detection

One additional process consideration is needed before the information in the article can be easily applied: there has to be a mechanism through which students can be called to verbally account for their work. This has to be supported by university cheating and plagiarism regulations. Many current regulations require the identification of a source document before cheating cases are considered. Just "a feeling" that the work was not written by the student would not be sufficient to allow a case or an interview to proceed.

Overall, automated detection of contract cheating is one area of research which is struggling, in terms of getting appropriate interest levels from researchers and in terms of finding techniques that work. Adding in new approaches like the ones suggested in this post, where several different indicators need to be found, may be the breakthrough that this field of research needs.

Article by Thomas Lancaster

I am an experienced Computer Science academic, best known for research work into academic integrity, plagiarism and contract cheating. I have held leadership positions in several universities, with a specialty in student recruitment and a keen interest in working in partnership with students. Please browse around the blog and the links, and feel free to leave your thoughts.
One Comment

  1. Wasi Khan says:

    Nice to know someone else is also thinking of the detection of contract cheating! I totally agree with you on using ‘indicators’ for detection, but I guess we differ on what these indicators are.

    The type of system I advocate (and subsequently created) uses a proctoring-esque approach, and uses intelligent documents to detect contract cheating.

    Any questions/comments appreciated!
