Assessing With Integrity – The Role of Technology – Webinar Slides

As part of my academic work with the UK’s Quality Assurance Agency I delivered a short webinar presentation looking at technology and assessment in light of Covid-19 and the move to online teaching.

The slides considered why academic integrity is still important, the threats posed to academic integrity in the current situation and the associated technological tools available to support education and quality assurance.

You can see the slides I used below (and also on my SlideShare account).


The webinar was well-received and led into an interesting panel discussion. One of the areas that attracted interest was my point that technology is a tool, not a solution. Software designed to identify similarity and detect plagiarism can be useful for helping students to improve their writing and for ensuring academic integrity, but the use of such a tool does not mean that all documents are free of plagiarism. Similar considerations apply to software designed to ensure the integrity of online exams.

I do think the sector has done well to adapt to supporting students using teaching and assessment modalities that are new to everyone in a very short time period, but this does not mean that we should be complacent. We can use this as an opportunity to innovate and improve the standard of education for all students, whilst still preserving academic integrity.

Assessment and Plagiarism – Research Opportunities

As one of my previous posts has indicated, it is rare for assessment and plagiarism to be considered as equal topics within educational research.

The book chapter, Assessment and Plagiarism, by Thomas Lancaster (me), Anthony Robins and Sally Fincher addresses that issue for the computing discipline. It is part of The Cambridge Handbook of Computing Education Research, a book that “describes the extent and shape of computing education research today”.

As well as discussing the importance of assessment and taking steps to minimise plagiarism, the chapter focuses specifically on techniques that are most suitable for computing. The chapter also provides recommendations for future research in the field.

In this post, I’ve picked out five ideas for research opportunities from the chapter that have implications for multiple disciplines (beyond Computing). Of course, you should still read the full chapter for more ideas and a lot of background that will help any future research plans (and make the literature review sections of papers much easier to complete).

Collated and Reusable Assessments

In previous years, there have been pushes across the sector to build up collections of reusable learning components, including assessment banks intended for wider use. How well are those projects working? What measures are taken to keep the assessment banks up to date? Do students and educators see value in these activities continuing? And how can plagiarism and contract cheating be avoided with these standard assessments?

Essay Spinning

This isn’t a new topic for the blog (see these posts), but it is still one that hasn’t been widely investigated. When a student automatically converts one version of an essay to another, perhaps through back translation, how can this plagiarism be detected? Are there indicators that academics should be looking out for when they are marking? Or are there indicators that a machine could identify? Failing that, could multiple versions of an assignment be generated in multiple languages to use with text matching software?
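As a toy illustration of why spun text is hard to catch, the sketch below (the example sentences are invented) compares an original sentence with a hypothetical back-translated version using simple surface similarity, the kind of measure that text matching tools rely on:

    from difflib import SequenceMatcher

    # Invented example: an original sentence and a hypothetical
    # back-translated ("spun") version with the same meaning.
    original = "The results of the experiment clearly support the hypothesis."
    spun = "The experimental findings plainly back the conjecture."

    # Text matching depends on surface overlap; meaning-preserving
    # rewording drives this ratio well below the 1.0 of a verbatim copy.
    print(SequenceMatcher(None, original, spun).ratio())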

Academic Integrity Processes

It is thought that these still vary greatly across the sector. Is that the case? More specifically, what about at discipline level? Are processes applied consistently and are penalties (when necessary) given out in a fair manner? What recommendations exist for best practice at a discipline level?

Gamification of Assessment

Gamification techniques are now widely used across many walks of life, everything from encouraging continued play of computer games to getting people to continue to shop in certain ways. How far will these techniques work with assessment? Are there methods that will make assessment more engaging and encourage students to develop their understanding to a more in-depth level than they otherwise would have done?

Automated Assessment

Many methods have been developed to reduce the burden of assessment on educators, including automated techniques with differing levels of success. At one end of the scale, there are systems that will automatically mark essays, although this is usually through metric-based assessment of writing style and keyword analysis of content. There are also many systems for marking simple exam questions, such as multiple choice and short answer questions. Can these systems be developed further? Can better feedback be developed? There are also many ethical questions worthy of investigation, such as whether it is fair to students to have their work marked in this way.
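To make the “writing style plus keyword analysis” point concrete, here is a minimal sketch of a marker of that general kind. Everything in it (the keyword list, the target sentence length, the 70/30 weighting) is an invented assumption for illustration, not a description of any real system:

    import re

    def auto_mark(essay, keywords):
        """Toy essay marker: keyword coverage plus a crude style metric."""
        words = re.findall(r"[a-z']+", essay.lower())
        sentences = [s for s in re.split(r"[.!?]+", essay) if s.strip()]

        # Content score: fraction of expected keywords that appear at all.
        coverage = sum(1 for k in keywords if k in words) / len(keywords)

        # Style score: reward an average sentence length near 20 words.
        avg_len = len(words) / max(len(sentences), 1)
        style = max(0.0, 1.0 - abs(avg_len - 20) / 20)

        # Arbitrary 70/30 weighting of content against style.
        return round(100 * (0.7 * coverage + 0.3 * style), 1)

    print(auto_mark("Plagiarism undermines assessment. Detection tools help.",
                    {"plagiarism", "assessment", "detection", "integrity"}))

A marker this naive is trivially gamed by keyword stuffing, which illustrates exactly why the fairness question above matters.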

 

Feel free to share your own ideas for good topics for future assessment and plagiarism research in the comments section.

Plagiarism and Assessment

I regularly discuss issues relating to the assessment of student work when I give presentations on plagiarism, contract cheating and academic misconduct. Since good assessment design is essential to engage students and reduce the potential for cheating, I would find it very difficult to talk about plagiarism and not incorporate assessment into the mix.

Such an approach does not seem to be universal, however. Some work on plagiarism does incorporate assessment, but work on assessment does not seem to incorporate plagiarism as regularly.

 

Academic Papers Referring To Plagiarism And Assessment

The table below shows the number of matches on Google Scholar for the search terms assessment, plagiarism and assessment plagiarism. Patents and citations are excluded, so these searches generally map to academic publications.

Search term             all         since 2013   since 2016   since 2017
assessment              5,570,000   996,000      371,000      104,000
plagiarism              312,000     31,400       29,100       13,100
assessment plagiarism   61,600      17,700       15,200       5,510

 

The overall figures suggest that 19.7% of papers on plagiarism also talk about assessment. However, only 1.1% of papers on assessment also talk about plagiarism.

This is, however, something of a simplistic measure, as academic papers use the word assessment to refer to subjects other than work with students. Topics cover such areas as the assessment of fish stock data sets, clinical assessments and the assessment of global warming. Looking through the first few pages of results, I’d estimate that around 1 in 10 uses of assessment actually refer to academic assessments.

By the same token, the rough numbers listed for plagiarism and assessment plagiarism are rather crude. Plagiarism is also used in other contexts, for instance when talking about plagiarism in books, in popular culture and as part of research misconduct. But the comparison remains reasonably fair. It seems safe to say that papers relating to plagiarism refer to assessment around twice as often as papers relating to assessment refer to plagiarism (20% compared with 11%).
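For anyone who wants to check or extend these figures, the arithmetic is sketched below, using the counts from the “all” column of the table above and the rough 1-in-10 adjustment for educational uses of assessment:

    # Google Scholar match counts from the "all" column of the table above.
    counts = {
        "assessment": 5_570_000,
        "plagiarism": 312_000,
        "assessment plagiarism": 61_600,
    }
    both = counts["assessment plagiarism"]

    # Share of plagiarism papers that also mention assessment.
    print(f"{both / counts['plagiarism']:.1%}")  # 19.7%

    # Share of assessment papers that also mention plagiarism: raw,
    # then adjusted by the rough estimate that only ~1 in 10
    # "assessment" matches refer to academic assessment.
    print(f"{both / counts['assessment']:.1%}")  # 1.1%
    print(f"{both / (counts['assessment'] / 10):.1%}")  # 11.1%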

The good news is that assessment and plagiarism research does seem to have become more closely interlinked over time.

Making similar assumptions to those above:

  • since 2013, 56% of papers relating to plagiarism refer also to assessment, compared with 18% of papers on assessment referring to plagiarism
  • since 2016, 52% of papers relating to plagiarism refer also to assessment, compared with 41% of papers on assessment referring to plagiarism
  • since 2017, 42% of papers relating to plagiarism refer also to assessment, compared with 52% of papers on assessment referring to plagiarism

(the latter data set is relatively small, as 2017 is still in progress, so I would recommend treating that final result with caution)

The trend to relate these two areas does seem to be one that is moving in the right direction.

 

Academic Paper Titles Referring To Plagiarism And Assessment

To get an alternative measure, I repeated the search on Google Scholar looking for the words plagiarism and assessment in the paper titles.

You can do this using the useful intitle: search operator, repeated before each word that must appear in the title (for example, intitle:assessment intitle:plagiarism). The results are below:

Search term             all        since 2013   since 2016   since 2017
assessment              831,000    71,600       73,900       19,000
plagiarism              5,460      1,770        602          224
assessment plagiarism   60         30           8            2

(note that these figures suggest that papers on assessment were withdrawn between 2013 and 2016, but that is likely to be a glitch based on the way that Google estimates the size of large data sets like these – the overall trends still seem reasonable)
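For anyone wanting to reproduce these title searches, a minimal sketch of building the query URL is below. Treat the as_ylo parameter as an assumption on my part about how Scholar encodes the “since” year filter; the patent and citation exclusions were set through the web interface:

    from urllib.parse import urlencode

    # intitle: applies only to the single word that follows it,
    # so it is repeated for each required title word.
    query = "intitle:assessment intitle:plagiarism"

    # as_ylo appears to be Scholar's "since year" parameter.
    params = {"q": query, "as_ylo": 2013}
    print("https://scholar.google.com/scholar?" + urlencode(params))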

 

A quick verification of the matches suggests that the 10% figure for the proportion of the assessment results relating to education still holds.

The results here are interesting in that, although the indications are that assessment and plagiarism are becoming increasingly mentioned in the same papers, this is not a strong link (it is rare to see both terms mentioned in the paper titles).

Looking across all four time periods, the results are relatively similar:

  • between 0.9% and 1.7% of papers referring to plagiarism in the title also refer to assessment
  • between 0.07% and 0.4% of papers referring to assessment in the title also refer to plagiarism

There are few strong links between plagiarism and assessment in research papers. Where these strong links exist, they are almost always in a paper on plagiarism that also incorporates assessment (not the other way around).

With that said, the relatively small number of papers demonstrating that they have closely considered both plagiarism and assessment would be ideal candidates for a focused literature review.

 

Research Flaws and Opportunities

The numbers here are very rough and ready. The approximation of the percentage of assessment papers relating to educational assessment is exactly that (a rough estimate) and may change from year-to-year. But I feel that there is enough here to illustrate general trends.

(there may also be some simple fixes for this – for instance, I wonder what the results would show if the word education was added to every search?)

Google Scholar, by its nature, is not a perfect system. It doesn’t record every paper, nor does it record every paper with the same level of detail. And sometimes non-papers slip in (I noticed a small number of assessment briefs with accompanying plagiarism statements in there).

It would be interesting to look at a corpus of abstracts to more accurately investigate the research links between plagiarism and assessment.

It would be useful to collect the results on a year-by-year basis to investigate trends, rather than rely on the general groups of dates that Google Scholar offers by default.

It would also be useful to examine alternative wording. For instance, is the term academic integrity linked with assessment research?

And, of course, similar techniques could be used to analyse research links between any two terms, even those completely outside of education.

Maybe I shall try some of those areas out when I have more time. Or, if anyone is interested in working with me on some data mining based research, let me know. There is certainly potential here as well to identify good terminology to use in academic paper titles (think search engine optimisation for academic research).

 

Web Pages on Assessment and Plagiarism

Even outside of pure academic research, these are rare.

Google finds only 953 web pages with both assessment and plagiarism in the title.

They are an interesting set of pages, many relating to regulations. Maybe I’ll talk more about that in a future post. The suggested related searches are also telling in many ways.

This post can be plagiarism and assessment web page result #954.

 

Building Student Digital Capability In Computing And Digital Technologies Through A Hackathon Community

My first Staffordshire University Teaching and Learning Conference proved to be a useful day and a good chance for me to find out more about the digital initiatives in progress around the institution.

I presented on the benefits of hackathons and community for students, based on my previous work and observations of developments since, which I’m pleased to say are driven directly by students. I also discussed how hackathons could provide for elements of authentic assessment, an initiative which is often recommended as a solution for contract cheating.

You can see the slides used in the presentation on my SlideShare account. They are also embedded below.

The conference itself was interesting, sharing much good work going on around Staffordshire University and featuring a keynote presentation from Eric Stoller. Eric reminded the audience how useful it is to be active on social media and showcased many of the great discussions taking place on Twitter to improve teaching and learning. I was glad to see one of my contributions featured in what was really a portfolio of tweets.

It seems that social media can bring a new zest to teaching and learning for even the most seasoned academic. Tony Bickley used the phrase “Twitter has changed my life” in his discussion, where he talked about all the new connections he’d made and the new ideas he’d had. There is certainly real value to developing a learning and support community outside of an internal university group.

I’ve also collected together a Storify with many of the tweets from the day.

Contract Cheating and Essay Mills 2017 Findings Part 6 – Which Students Are Contract Cheating And What Does This Mean For Assessment?

This is Part 6 of the 7 part series examining Findings From Plagiarism Across Europe and Beyond 2017

As Tracey Bretag said in the opening conference keynote, it is just not possible to set an assessment for which cheating is impossible. Despite that, there is still much good practice to be considered when setting assessments to benefit the students who engage with them.

Unusually, I think this is the first conference I’ve been to in a while where I didn’t once hear the term “authentic assessment”. With that said, several of the recommendations from conference speakers support the ideals of authentic assessment in all but name.

Which Students Are Contract Cheating?

Several studies presented at the conference showed progress towards answering the difficult question of how many students are contract cheating, and whether certain groups of students engaging in the practice can be identified.

In her opening keynote, Tracey Bretag settled on the figure of between 6% and 10% of students having contract cheated at least once. That figure remained largely consistent across the conference. Tracey did note that there was no significant difference in the cheating figures between the universities that Australia defines as elite and non-elite. I suspect that the same would be true in the UK, even though the figures that UK universities choose to report to the media can differ substantially.

Veronika Kralikova surveyed over 1000 students in the Czech Republic and found that over 8% of them had contract cheated. Veronika also observed a gender difference, with 5% of female students saying that they had contract cheated, but 15% of male students stating this. She also found that 34% of the students said that they knew someone else who had contract cheated, suggesting that this isn’t an activity that students keep quiet about.

Several other groups of students likely to be susceptible to the temptation of contract cheating were also identified in Tracey Bretag’s presentation, with numbers based on a survey of 14,086 students on courses at universities in Australia. 814 of these students said that they had carried out one or more behaviours classified as cheating on a wider scale.

15.8% of the overall survey responders were international students, but 33.0% of the cheating group were international students.

13.1% of the overall group were engineering students, but when looking at just the cheating group, this figure rose to 24.6%.

It does need to be stressed that the cheating behaviours do not just cover contract cheating and also include areas like hiring an exam impersonator or cheating in an examination, but the overall figures do suggest that there could be issues to overcome regarding contract cheating that are specific to the engineering discipline.

The identification of engineering is interesting, as many of the listings of the subjects where most contract cheating is found, including some of my own studies, identify the areas taught in a Business School as most at risk. Business was not singled out in Tracey’s presentation. However, there is still analysis to be done. It may be that Business has been a red herring, with the contract cheating numbers appearing high simply because there are a lot of students taking the subject. It may also be that engineering numbers are bolstered in this study due to examination cheating. The full analysis will be interesting.

There may also be an overlap between the international student group and the engineering student group.

Tracey also verbally noted that the highest cheating levels seemed to be related to groupwork, with a possible overlap with engineering. Contract cheating and groupwork is an important area to consider regarding assessment design. I’ve previously suggested that well-designed groupwork can make contract cheating difficult, since it can be structured so that cheating would require the complicity of the whole group. However, I’ve also observed outsourcing requests on agency websites where students are just sending their section of a piece of groupwork to a third party. To me, that isn’t groupwork at all; it’s just a set of standard assignments that can be completed individually.

Further, I recall a presentation at the Western Australia Forum for Contract Cheating where the presenter talked about whole groups of students agreeing to outsource their tasks as a collective. And, in that case, groups largely consisted of international students. This means that just assigning groupwork, on its own, is not a solution for contract cheating. More research into how to develop successful and authentic groupwork assignments in the age of contract cheating is needed.

What Assessments Are Susceptible To Contract Cheating?

Why students cheat and plagiarise is a long-standing question, but the answers do point towards types of assessment that may work better than others.

Tracey Bretag presented the results of a survey of more than 14,000 students in Australia that was used to identify which types of assessments they were most likely to outsource. The top three were: (1) assignments with a short turnaround, (2) weighted assignments and (3) continuous assessment. Hannah Sketchley, representing the National Union of Students in the UK, gave supporting results from her investigations, where “high stakes assessment” was of concern. From a practical viewpoint, I can see that, but from a pragmatic viewpoint, I also know of students who complain about overassessment when there are too many assessment points in a module. That may also support the high ranking given for the likely outsourcing of continuous assessment.

Indeed, in my presentation I discussed the growth of sites designed to complete every assignment on a course or module for a student and such sites appear highly targeted at students with lots of small assessments. It will be interesting to see what the recommendations are that will rationalise two concerns that seem to be polar opposites.

The issue of assignments with a short turnaround continues to be of concern, as there is no evidence suggesting any benefits to students here. I’ve shared many examples of student assignments being completed by third parties in a matter of hours, and Phil Newton has analysed turnaround times by individual writers to show that they can deliver work quickly (and may even like the faster turnaround times, as they can charge a premium price). Phil shared an interesting observation from an essay mill that now defaults to a three-day turnaround on the site. This suggests that essay mills have decided that a fast turnaround is the best way to market their offer.

The results from Tracey’s survey were not all doom and gloom. She also identified the three factors that students said would make them least likely to outsource an assignment. These were: (1) reflections on practice, (2) vivas and (3) personalised and unique assessments. I’ve long advocated for the increased use of vivas within higher education assessment. They are not perfect, but they can work well if used in a controlled manner. The other ideas are worth considering. Many essay mills offer reflective writing, although it may be that students choose not to order this.

Personalised assignments are another good way to increase student engagement, but like the other assignment types, they are not foolproof. I’ve observed many examples of students outsourcing project reports and dissertations, getting the work back a chapter at a time and passing their supervisor’s comments on to their hired writer. There are whole sites that market themselves solely as dissertation and capstone project suppliers. I’ve seen lots of examples of dissertation outsourcing at MBA level and have also observed requests at PhD level. Other safeguards still need to be in place here.

Teddi Fishman suggested a possible variant on the viva which may be worth trying. In this assessment, students give a presentation based on the topics they’ve learned about in the module. The twist is that they don’t know what will be on the presentation slides until they arrive in the assessment room. If anyone does test that one out, please let me know how well it goes.

Contract Cheating and Examinations

One suggestion that is often made when contract cheating is discussed is to simply use examinations again. That may be a partial solution in some cases, but it’s not a complete solution. I was pleased to see that I was not the only person presenting on the challenges posed by examinations. This topic found its way into several other presentations.

In the survey of over 14,000 Australian students reported by Tracey Bretag, she found that 0.2% of students had got someone else to take an exam for them. Of those students, only 10% had paid money. By contrast, 0.5% of students said that they had taken an exam for someone else, of whom 16.7% received money. I do think that some caution needs to be applied to those figures, as many students seem to use exam as an interchangeable term for assessment. If correct, the difference between these figures has to be of interest.

Tracey’s team also surveyed over 1000 academics working at Australian universities. They found that 5% of staff had observed impersonation in examinations, a number that is much higher than I would have anticipated and has to be cause for alarm.

Bob Ives presented his work in progress regarding cheating in Moldova and Romania. Both countries were said to have substantial problems with examination cheating, including through impersonation and through the use of unauthorised materials in the exam.

I was also introduced to a site I hadn’t seen before, http://ipaidabribe.com, where individuals in India post about bribes they’ve (had to) pay. India has often been in the news regarding exam cheating, and unsurprisingly the site contains several hundred examples of bribery relating to exams, including bribe payments required to pass driving tests and engineering certifications, and even to qualify as a medical doctor. It’s a site I need to explore further.

My own presentation showed several examples where people attempted to outsource their examinations, including university students and people taking professional exams. Tests taken on a computer looked to be particularly susceptible here. Students were seen using several novel ways to communicate with people outside an examination hall, including instant messenger services like WhatsApp. If communication like this is successfully happening, some changes regarding examination security are necessary.

I also discussed the availability of other technology, particularly the hidden earpieces that were found to be prominent during our SEEPPAI research and that allow someone outside an examination hall to whisper answers to a student inside it. A question was raised regarding how such exam cheating technology works, so cheating devices are also an area that I feel needs more widespread communication with the academic audience.

Assessment Recommendations

The same principles regarding good assessment repeated themselves in several different ways during the conference. One of the main ones has to be to stop essay writing being a central part of the requirements for an academic qualification. These are assessments that are susceptible to contract cheating and are “bread and butter” to writers for contract cheating services. As several presenters expressed in different ways, if a writer can turn out multiple essays on multiple subjects in a day, then these can’t be essays that are worth writing or reading. And yet such essays still seem to be purchased, and they still seem to pass.

Hannah Sketchley said that there was a need to co-design assessments with students. It’s not the easiest thing to get right, particularly while also complying with quality processes, external expectations and professional body requirements, but it is certainly a direction to strive towards. A similar recommendation, to redesign assessment to remove high stakes components, also came from Wendy Sutherland-Smith. Wendy has been attempting this, but she also noted that the approach has heavy resource implications which may not prove sustainable in the long term.

Teddi Fishman summed up the challenges posed by contract cheating and assessment well at the keynote that closed the conference. Teddi advocated that “we must require our students to be active participants in their own learning”.
