Alongside the risks, Robert Lepenies, President of Karlshochschule International University in Karlsruhe, also sees opportunities for the new language AI ChatGPT in the university environment.
Editor: Mr. Lepenies, the chatbot ChatGPT formulates surprisingly clever texts. So clever that many students no longer have to write a seminar paper themselves. Does that worry you?
Robert Lepenies: No, I find the development extremely exciting. My first impression of the freely usable chatbot was so overwhelming that on the same day I called together the entire administration of our university, and a day later all the students, to discuss the impact of this development on all of us.
And?
The tool offers a glimpse into the future, showing what is possible for a digitized university in which texts, some of them meeting academic standards, can be created very quickly. I am convinced that using this language model will one day be as normal as using the spell checker in Word.
But for what purpose? Will students then simply have the chatbot write their term papers?
Why not? AI will influence scientific work just as other technologies did before it. What matters is that the core of the old cultural techniques of reading, writing, and thinking is preserved – just under different conditions. How, we have to find out together – with the machines.
On Twitter, you painted a comparatively bleak picture of AI being used carelessly and without concepts. What exactly are you afraid of?
I’m not so worried about us: We are a small university where we know all the students and can teach in a very interactive, lively way in small seminars. That limits the potential for abuse.
Why is that? Because you can better assess each student's individual performance?
Yes, we can talk personally about precisely these human-machine interactions – and argue about them. The concern comes more from colleagues at larger universities who suddenly have several hundred papers on the table, for which ordinary plagiarism software is no longer sufficient.
What fears have your colleagues expressed?
There are many opportunities for abuse: it’s easier to feign scientific rigor; it’s easier to feign knowledge itself. The machine does not do the thinking; in the worst case, it merely pretends to. The self-confidence of the language model is also unscientific: a good scientist communicates uncertainties and their own errors much better than any AI I know of.
Some of your colleagues already speak of a “complete idiocy” because students no longer have to learn anything. How real is this danger?
I think the reactions to this tool also reveal the image that we as universities have of our students. A lot is being projected onto the machine.
What exactly do you mean?
Students want to learn. Anyone who sees such a new technology and immediately thinks of cheating does not take students seriously as learners – and for them, exams and proofs of performance are more important than the question of why we actually write texts and have them written.
Robert Lepenies, who holds a doctorate in political science, is a Professor of Plural and Heterodox Economics at the Karlshochschule International University in Karlsruhe. In October 2022, the 38-year-old was appointed the new president of his university.
How can students be prevented from abusing the tool?
Many would probably answer: with bans, or a return to the old forms of testing with pen and paper – and that may even be possible in some places. But I would strongly advocate integrating the tool into exams: for example, by having an exam written with the chatbot but adding a kind of reflective dimension in which students reason about what the AI did for them and how it encouraged them to think further. We have to live with the technology – and shouldn’t fight defensive battles.
What could cooperation with the chatbot look like?
In the best case, such an AI is a creative ally that you bring along – I would like to see more discussion about this. We should ask ourselves: What do we actually want to achieve as a university? I firmly believe that students who are interested in learning can benefit enormously from such an AI tool.
How come?
Today, for example, I gave a group the task of juxtaposing three different AI-generated texts on the same research question and explaining to me where each text meets scientific standards – and where it doesn’t. That is how you learn to weigh scientific evidence. Far too little is being said about this at the moment; instead, the fear of cheating dominates. But who takes seriously that the majority of students go to university to learn something and will use the tool to their advantage? I want to strengthen those voices.
But shouldn’t universities evaluate the performance of their students differently in the future?
Term papers – as we have known them so far – no longer work, especially not at the big universities. In such a standardized system, optimized purely for output, you can easily cheat with the software. In doing so, however, the tool exposes what is fundamentally wrong in academia.
And what relief do you expect from GPT-3?
The tool can take over routine work – such as module descriptions and other internal university documents, which can now be revised easily. And teachers can, for example, use the tool to adapt teaching materials from one course for another with little effort, because the AI is also capable of learning to a certain extent. The horror scenario is always: an AI dictates content to me and is controlled by others. No – I can manage and steer it myself. But that requires a certain skill.