On the use of AI by students in HE, or, how AI will transform education
I should say off the bat that I am ‘pro AI’, whatever that means. I use both Claude (my favourite, as it is a terrible flatterer) and ChatGPT, although not yet enough to justify a subscription to either. This underuse is down to the nature of my work, as well as the limits of my own imagination when it comes to using AI to augment and enhance my life. Fundamentally, I think we all need to become adept at integrating AI into our lives because we are on a precipice, and those who do not learn how to incorporate it will be left behind.
What follows are some thoughts on how AI is currently affecting certain aspects of higher education in the UK, namely coursework assessment, and how it could fundamentally transform the structure and function of education altogether.
Students’ use of AI
My thoughts on students’ use of AI are informed by observing the debates about this in higher education in the UK, but they apply to school and college students as well. What I say below is predominantly about theoretical subjects and the theoretical aspects of practical subjects, as AI clearly cannot do the practical elements for the student at present.
Academics rightly observe that many students are going to use AI in the course of their studies. Academics also rightly observe that they cannot stop students using AI. As such, some academics are now “allowing” the use of AI by students, including in essays and other coursework. Some ask that students declare the prompts they used in the references section. These academics are otherwise marking the coursework traditionally, as if it were the product of the student alone. This, I feel, is a mistake: it is unfair to previous generations of students, it does not reflect the reality of the situation (a perennial academic problem), and it serves only to hasten the demise of the academy by hollowing out the meaning and value of any given degree.
Who or what is being assessed?
By continuing with pre-AI assessment methods, academics are effectively eviscerating the value of a degree. At its core, the purpose of assessment is for the student to demonstrate that they have learnt something of the subject matter being taught. So, with AI-produced and/or AI-augmented assessment responses, who or what is being assessed? To a greater or lesser degree, it is the AI that is being assessed directly, and the student’s ability to construct prompts indirectly. This means two things: 1) individual students will not have demonstrated their command and knowledge of any particular subject or discipline, and 2) students will not have received actual training in the thing they are in fact learning how to do: prompt engineering. Nor are they being taught how to critically evaluate AI responses.
The plagiarism question
Plagiarism is the cardinal sin in academia, as the entire industry is built on references to, and acknowledgments of, one’s peers and betters. The absolute prohibition on passing off someone else’s words and ideas as one’s own sits in stark juxtaposition to this permissive stance on AI. Large language models are, like humans, outputting information they have consumed. Even if the training data had all been given consensually (it was not), and even if there were no copyright issues (there are), the outputs of any given model are, in essence, a soup of unreferenced source material, and in that sense a de facto form of plagiarism. To think of a model’s output as somehow unique and/or original belies the truth of what the model is made of.
Some short- and long-term solutions
In the short-term, I believe the solution to the use of AI by students is to change assessment methods to closed book examinations, whether that be oral or written. This way, the student will still have to ingest and assimilate their knowledge and analysis, whether that be taken directly from AI or from AI plus the traditional corpus of academic knowledge (lecturers, books, articles, primary sources, etc.). They will have to commit something of the subject/discipline to memory, which is an expectation of the bearer of a degree in any given subject.
In the long term, does any individual human need to demonstrate their knowledge and understanding of a subject in the way we once did? I think not. In this way, I see the death of academia in its current form. Rather than becoming knowledgeable about any particular subject or area, my intuition is that, intellectually speaking, humans will need to become generalists and learn how to marry up vast webs of information in creative and generative ways. Currently, creative and imaginative inquiry is what underpins the degree; in the future, I imagine the focus of education will be on actively learning how to develop fruitful inquiry, with the subject matter through which that inquiry is expressed simply a reflection of the student’s interests.
The creative and imaginative future of education
As such, I can see a future where all students are actively taught how to write clear and direct prompts and how to engage with AI in fruitful ways. Alongside this, perhaps assessments will be of the student’s ingenuity in formulating productive prompts, or in laterally or intuitively connecting AI-mediated information across multiple subject areas. At the end of the day, what is the value in knowing any particular fact or theory when all are known by AI and accessible instantly? In this way, I do not think we will pursue degrees in this or that subject. Rather, the value and skill imparted by education will lie in teaching students how to join up discrete pieces of information and to solve problems in new and innovative ways. In this way, I see rather less formal education overall, and instead a continual engagement with prompt refinement and with intuitive and creative questioning: a continuous working through of the detail to get to some imagined future. Some will pursue this more than others, just as currently some leave school at the earliest opportunity and others go on to pursue a degree.
In this same future, will there be academics? In a way, no; in a way, yes. I see an inversion of the current academic. Rather than a specialist in a particular narrow field of study, the new academic will be a mercurial figure with a wide range of interests, whose competing passions drive them to bring together disparate ideas and information in new and creative ways. Will these people be cloistered in a university? Possibly, until all books have been consumed by AI and the compute power for the largest models becomes available to home computers; but after that, of what use is the university when all we need is our curiosity and access to AI? This is, of course, also why AI needs to remain open source, decentralised, and outside the control of governments and authorities.
The past few decades have seen the denigration of, first, technical and practical skills (‘the trades’) and, second, the arts and humanities, with a concomitant and ever-growing emphasis on STEM subjects. In the future, I see a reversal of this. Already, the well-paid and coveted jobs of programmers and data scientists are being displaced by AI. In the future, I think the creativity embodied in the arts will be the driving force behind a technological revolution I can only barely conceive of, such is its magnitude. In the nearer term, manual dexterity and skill in the production of physical things will be revalued and rewarded, that is until the robots get nimble enough! Even then, I believe there will always be value in human physical labour, in the way that hand-made haute couture clothes, for example, are currently valued over and above factory-produced garments.
Humans, I think, are broad, vast, creative, and complex entities, and this is why so much of twenty-first-century life chafes. We are confined to narrow and rigid boxes, professionally and personally. I see a hopeful, expansive future opened up by AI, one which allows our beings to spread and explore. Who amongst us is not curious about something, after all? In the best kind of future, we all have a personalised AI assistant to complement our abilities, one that frees us to explore our interests more fully.
So, the transformed education I dream of is a dispersed one. It is an erratic, decentralised, and irreverent place, constructed and driven by lateral and intuitive thinking. A place in which human play and creativity are empowered and emboldened by AI to try and fail, to try and win, over and over again, and all in approximately five seconds flat. After a couple of hours of productive and creative work, we’ll all go out for a nice walk, of course, to remind ourselves of our fundamental embodiment and to generate more ideas to explore with our computer friends.
Comparative addendum: human vs AI
It occurred to me that it might be interesting to see what Claude and ChatGPT come up with in response to the title of this article. I ran the request twice for each model and found the differences illustrative of my point about prompt engineering. I have not included their responses here as you can just run the experiment yourself.
The first prompt was “Please write me a maximum of 1000 word article using the following title: ‘On the use of AI by students in HE, or, how AI will transform education’. Please cover both clauses of the sentence title in the article.” Both Claude’s and ChatGPT’s responses were, to my mind, fairly pedestrian and lacked vision. They focussed on the detail of the present moment, which is important and perhaps nicely supplements my broad-brush backstory above, but I felt slightly bored by both of their responses. That said, each gave me a perfectly adequate response to the article title.
In the second prompt, I asked them to use the same subheadings as my article, to see if I could steer an output closer to what I wrote and was thinking of. I was pleasantly surprised to find Claude coming a little closer, although ChatGPT stayed dry and boring. In general, Claude’s output has a much warmer feel to it, and its language seems more ‘human’ than ChatGPT’s.
In my opinion, the main difference between what I wrote and what both AIs generated is around the possibility of the unknown. My brain is free to imagine something other than what education currently is, whereas Claude’s and ChatGPT’s ‘vision’ of the future still imagines education existing in a version of its current form. My brain can demolish the present and construct an entirely imagined and novel future, whereas the future AI imagines is constrained and restrained by what currently is. It doesn’t yet dare to dream, nor do its ideas have any real liberating potential. I suspect this is a result of the current parameters of its programming, but it again shows why, from the student’s perspective, it is unwise to rely on AI to generate your coursework. Unless, that is, you are happy being a dry and somewhat unimaginative student. No disrespect to Claude intended.