The technology behind Artificial Intelligence (AI) systems for writing continues to advance at a rapid rate. The launch of ChatGPT in late 2022 prompted another spurt of interest in how students could use AI technology to breach academic integrity. The provision of a chatbot-style interface, with a memory of previous requests and the opportunity to refine them incrementally, put the opportunity for academic misconduct in the hands of more students than ever before.
There is little published research available yet to show whether students are moving to AI tools (and misusing them). Ultimately, though, a question has to be asked about the ethics of student use of ChatGPT and similar systems. With AI technology in wider use across the world, should the use of AI be considered academic misconduct, or simply a helpful tool for learning and assessment? This blog post provides an initial exploration of the complex and controversial issues surrounding AI generated text and academic integrity, and considers how the academic community can adapt to this rapidly evolving landscape.
The Ethics of AI in Academia
The use of AI generated text in academia raises a number of ethical questions and concerns. On the one hand, the use of AI can undermine the learning outcomes of assessments and reduce the value of a student’s degree. If a student is not actively engaged in the learning process and simply relies on AI to complete their work, they may not gain a deep understanding of the subject matter, leaving them unprepared for subsequent assignments. Additionally, the use of AI may allow students to cheat or plagiarise without being detected, which goes against the principles of academic integrity.
On the other hand, AI can be a valuable tool for learning and assessment, so long as it is used appropriately and in accordance with guidelines. The problem is that most educational institutions do not yet have such guidelines prepared. AI technology, such as ChatGPT, has the potential to improve efficiency and accuracy, and can provide students with new and innovative ways to demonstrate their knowledge and skills. However, it is important to carefully consider the ethical implications of using AI and to ensure that it is not being used to unfairly advantage some students over others.
The Use of AI in Essays and Assignments
The use of AI to write essays and complete assignments is a controversial topic. There are many arguments about the quality of AI generated work, and in practice this often depends on the skills of the user. There is a skill to generating high-quality AI output that meets assessment requirements, known as prompt engineering. Prompt engineering involves carefully crafting prompts and inputs to guide the AI’s output in the desired direction, and this can be a useful tool for generating AI text. But being an effective prompt engineer is not the same as understanding and engaging with the subject matter on a deeper level, which is often the primary goal of assessments.
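To make the idea of incremental prompting concrete, here is a minimal sketch of what it looks like in code. ChatGPT itself had no public API when this post was written, so the sketch below uses OpenAI’s later Python client purely for illustration; the model name and prompts are invented examples, not taken from any real assessment.

```python
# A minimal sketch of prompt engineering: iteratively refining a request
# so the output better matches assessment-style requirements.
# Assumes the `openai` Python package is installed and an API key is set
# (e.g. via the OPENAI_API_KEY environment variable).
from openai import OpenAI

client = OpenAI()

# A conversation history lets each follow-up build on earlier output,
# mirroring the chatbot-style "memory of requests" described above.
messages = [
    {"role": "user", "content": "Explain academic integrity in 200 words."}
]
first = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
messages.append({"role": "assistant", "content": first.choices[0].message.content})

# An incremental refinement: the hypothetical user steers tone and structure
# without restating the whole task.
messages.append({
    "role": "user",
    "content": "Rewrite that for a first-year student and add two examples.",
})
second = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(second.choices[0].message.content)
```

The point of the sketch is the conversation history: each refinement builds on the previous output, which is exactly the chatbot-style interaction that has made the technology so accessible.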
While it is possible to get excellent results using AI, especially for standard tasks, students need to be able to check the output for accuracy. People who have a deep understanding of the subject matter will likely have an advantage in refining and improving the AI’s output, while those who are less familiar with the topic may struggle to spot errors or inaccuracies.
I’ve tested out many of the emerging AI text generation technologies, most notably ChatGPT. Although ChatGPT is primarily intended to generate text, I’ve found ways to use it for other types of assessment, including completing computer programming assignments, writing tutorials, producing PowerPoint presentations with scripts, and even drafting research papers. The release of ChatGPT has lowered the barrier of entry for many students. Here are some examples pulled from my Twitter account.
I got #ChatGPT to build a new website for staff (faculty) about #academicintegrity (1/2) pic.twitter.com/e0FDMlol9Y
— Thomas Lancaster (@DrLancaster) December 5, 2022
Automatically generating PowerPoint slides with #ChatGPT. It is possible, if you know how to run generated Python code. #academicintegrity #GPT3 #artificialintelligence (1/5) pic.twitter.com/VyNGqP4nGF
— Thomas Lancaster (@DrLancaster) December 4, 2022
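The slide-generation workflow in the tweet above involved asking ChatGPT to emit Python code and then running that code locally. As a rough illustration of the kind of code this produces, here is a sketch using the python-pptx library; the slide titles, content, and filename are invented for this example and are not taken from the original thread.

```python
# Illustrative sketch of the kind of Python code ChatGPT can generate to
# build a slide deck, using the python-pptx library (pip install python-pptx).
# The slide content here is invented for demonstration purposes.
from pptx import Presentation

# A hypothetical outline, of the sort a chatbot might produce on request.
slides = [
    ("Academic Integrity", "Why honesty in assessment matters"),
    ("AI Writing Tools", "Opportunities and risks for students"),
    ("Adapting Assessment", "Vivas, live activities, and experiential learning"),
]

prs = Presentation()
layout = prs.slide_layouts[1]  # built-in "Title and Content" layout

for title, body in slides:
    slide = prs.slides.add_slide(layout)
    slide.shapes.title.text = title
    slide.placeholders[1].text = body  # index 1 is the body placeholder

prs.save("generated_deck.pptx")
```

Running a script like this takes seconds, which is why asking a chatbot for code and executing it yourself is such a low barrier for producing a complete, assessable artefact.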
It Is Time to Adapt
As educators, it is important that we adapt quickly to the challenges and opportunities presented by AI technology. While there are valid concerns about the ethics of AI generated text, AI is here to stay and will likely continue to play a significant role in academia. Rather than trying to battle against it, we need to find ways to work with it and to define what is acceptable for students to do. This may involve revising policies and guidelines, providing education and resources for students and instructors, and finding ways to assess student learning that go beyond simply checking for the use of AI.
One potential solution is to focus on incorporating more practical and hands-on learning experiences that cannot be easily outsourced to AI. This could include lab experiments, field trips, group projects, and other interactive learning activities that require students to actively engage with the material. By shifting the focus of assessments away from traditional written assignments and towards more experiential learning opportunities, it may be possible to better evaluate student understanding and skills.
Much of my research has focused on the problem of contract cheating, where students outsource their assessed work to a third party. If students use ChatGPT and AI technology, are they outsourcing their work? They are generating original text, usually never seen before, which could give them an unfair advantage if misused and undeclared. From my experience, it is certainly possible to generate text on a par with that from a contract cheating provider. To me, that means that if a student could pass by contract cheating, they could use a different set of skills to employ AI writing technology and pass as well.
We need to adapt. We need to work with students and find ways to help them to engage with learning. We need to evaluate the benefits that AI can bring to education. Ultimately, we may have to rewrite our definitions of academic integrity to give us all the flexibility to do the right thing for students in an ever-developing world.
Thanks. I have been playing around with ChatGPT a bit lately and find it quite scary. Most often, there are very substantial errors in the factual content, errors that nonetheless make intuitive sense and are easy to overlook if you are not a subject expert. One concern is that this enables the amplification and dissemination of errors. In relation to assessment, the old system of viva voce exams might also be considered, although the challenges associated with eliminating bias in assessment are not trivial.
Vivas are excellent in my experience but very labour intensive. A suggested solution is a mix of vivas, traditional exams, and assessed live activities (assessment centre style).