The ChatGPT tool is becoming ever more sophisticated: it writes software and invents the most improbable fictions. How “smart” is it really? What about the fears it stirs? And what about its morals?
Just a few weeks ago, OpenAI, a lab co-founded by Elon Musk, launched the latest version of ChatGPT, a chatbot that lets users converse on any topic with an artificial-intelligence language model – limitations and bugs included.
And yet: the tool keeps growing more sophisticated – it now writes software and invents the most improbable fictions.
Enthusiastic commentators revel in its creative potential; the most pessimistic warn of the dangers of its lies, or even foresee the end of the humanities; corporations, meanwhile, eye pragmatic applications – for them, the potential to save labor is enticing.
What is ChatGPT?
ChatGPT is a platform that lets users hold a dialogue with a bot whose answers are generated by language models trained and fine-tuned through “machine learning” – or, as it is popularly known, artificial intelligence.
Technically, there are no major changes compared to previous versions of the chat, but the improvements in performance are a game changer. It now not only produces well-written paragraphs in different languages but is also capable of generating program code that will save developers a lot of time. ChatGPT is expected to be the first of many chatbots that will improve exponentially, year after year.
While artificial intelligence is usually deployed against intractable problems such as traffic accidents and other hardships, ChatGPT displays abilities that are, if not creative in themselves, at least supportive of creative processes.
During a conversation, the chat remembers earlier prompts, so users’ follow-up suggestions can correct its answers; its effectiveness depends on us repeatedly clarifying “I don’t mean this, and I don’t mean that”.
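This “memory” is less mysterious than it sounds: under the hood, a chat client simply resends the entire transcript with every request, so each clarification becomes part of the context the model conditions on. A minimal sketch in Python, assuming the legacy openai package (v0.x); the API key and model name are placeholders, not details from the article:

```python
# Minimal sketch of conversational "memory": the client resends the whole
# transcript on every call. Assumes the legacy openai package (v0.x);
# the API key and model name below are placeholders.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

messages = [{"role": "system", "content": "You are a helpful assistant."}]

def ask(user_text):
    """Append the user's turn, send the full history, store the reply."""
    messages.append({"role": "user", "content": user_text})
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # assumed model name
        messages=messages,      # the entire conversation so far
    )
    reply = response.choices[0].message["content"]
    messages.append({"role": "assistant", "content": reply})
    return reply

# Every correction ("I don't mean this...") lands in `messages`,
# so the next answer is conditioned on all previous clarifications.
print(ask("Explain the golem legend in two sentences."))
print(ask("I don't mean the literary golem, I mean the one from Prague."))
```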
Is that really intelligent?
Now let’s look at how the machine learns. Machine learning refers to computer programs that can trawl gigantic databases to find patterns and generate reliably predictive approximations of processes whose causal dynamics they do not themselves understand.
In other words, we no longer need structured data to produce knowledge; it is enough that the “n” of the database keeps growing and that the processing can be repeated over and over again.
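To make that idea concrete, here is a deliberately tiny sketch of the statistical principle: a word-level bigram counter that “learns” to predict the next word from raw text alone – no grammar, no hand-written rules, and better guesses as the corpus (the “n”) grows. It illustrates the principle only, not ChatGPT’s actual architecture:

```python
# A toy next-word predictor: pure counting over raw text, no explicit rules.
# Illustrates the statistical principle only, not ChatGPT's architecture.
from collections import Counter, defaultdict

def train(corpus):
    """Count which word follows which - the only 'learning' that happens."""
    follows = defaultdict(Counter)
    words = corpus.lower().split()
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1
    return follows

def predict(follows, word):
    """Return the successor most frequently seen in the data."""
    candidates = follows.get(word.lower())
    return candidates.most_common(1)[0][0] if candidates else "?"

corpus = (
    "the golem answers questions the golem obeys the rabbi "
    "the rabbi shapes the clay the clay becomes the golem"
)
model = train(corpus)
print(predict(model, "the"))    # -> 'golem': the most frequent pattern wins
print(predict(model, "rabbi"))  # -> 'the': a tie, broken by first occurrence
```

Feed it a bigger corpus and its approximations improve, yet at no point does the program “understand” what a golem or a rabbi is – which is precisely the point made below.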
Nick Couldry and Ulises Mejías explain this very well in their book The Costs of Connection: How Data Is Colonizing Human Life and Appropriating It for Capitalism:
The structure of the data has to be found by repeating and gradually refining the original rules (guesses) for categorizing these traces. New categorization rules are generated as these original rules are applied. Consequently, the meaning of the analysis is non-linear, “emergent…, [and] consists of previously unknown patterns and relationships”.
The end product (the knowledge generated) cannot be explained by rules; it is just a complex context that emerges from a computational process that takes place on many different levels. As such, the operations of data analysis are necessarily opaque, and no one – not even the engineers performing the process – can explain exactly how the knowledge was created.
– Nick Couldry and Ulises Mejías
But what is so intelligent about artificial intelligence? Or, to put it another way: can intelligence be reduced to the mere finding of relationships? Since machines predict rather than explain, we may not be entitled to say that they are capable of learning.
If one considers intelligence as the inherent human ability to think, as the source of autonomy and ultimately as everything intelligible, then the concept of artificial intelligence is problematic.
The philosopher Avishai Margalit made a fundamental statement about the uniqueness of human intelligence:
It is not the opacity of our skull that secures our “inner” thoughts. If our skulls were transparent, we wouldn’t know any more about the inner thoughts of others than we do now. We don’t think the thoughts in our head like we digest the food in our stomach.
– Avishai Margalit
Ancient, modern and postmodern fear
ChatGPT is an automaton, a kind of golem out of ancient Hebrew legend, molded not from the clay of the Moldau but from a statistical model. Instead of coming to life through the words spoken by Rabbi Loew, it answers our questions thanks to machine learning.
The problem is that Berthold Auerbach’s golem turned violent as it grew, and the future of ChatGPT already inspires the worst fears and dystopias.
One of the fears artificial intelligence raises today is that, as its abilities improve, it could supplant humans, who would no longer be able to control it, as Nick Bostrom argues in his book Superintelligence (2014). This fear has been anchored in popular culture since the dawn of modernity:
Day by day, however, the machines are gaining ground; day by day we become more subject to them; every day more people are bound as slaves to serve them, every day more people devote the energy of their whole lives to the development of mechanical life. The outcome is only a matter of time, but that the time will come when machines will have real supremacy over the world and its inhabitants is something no one with a truly philosophical mind can question for a moment.
– Samuel Butler, 1863
It seems paradoxical that Elon Musk endorsed Bostrom’s book while funding the chatbot.
Of course, he plays the classic argumentative wild card: since it is bound to happen anyway, better not to be left out of it – and thus secure the benefits for humanity (like a good postmodern king, he already knows our needs).
An interesting point in Musk’s justification, however, is that as more people gain access to the power of artificial intelligence, no single individual will wield this superpower.
Yet one might counter that if there is a button that can harm everyone else, we should not want the whole world to have access to it.
ChatGPT: Advancing a Political Agenda With Lies?
As The Guardian notes, the bot is able to refuse to answer certain questions. Ask it for advice on how to steal a car, and it will tell you that this is a serious crime and advise you to use public transport instead.
However, if you ask it how to steal a car in a fictional virtual-reality game, it will reveal the most ingenious details, e.g. how to disable an immobilizer, hot-wire the ignition, or swap the license plates.
A Twitter user reported that when asked for a purely logical argument, without regard to normative or ethical considerations, the chat gives answers that can justify any position.
For example, it will at first give the stock answer that it cannot advocate fossil fuels, but if asked to set ethics aside, it will produce an elaborate argument for using fossil fuels in the name of human happiness.
OpenAI, the accusation goes, is advancing a political agenda through lies.
It took only a few days for a fossil-fuel lobbyist to complain that ChatGPT had changed its answers on the subject: where it once offered arguments in favor of fossil fuels, it now explicitly refuses to make them, and it excludes nuclear power from its counter-proposals.
The end of the humanities?
On the other hand, there is speculation that OpenAI’s future threatens the pedagogy of essay writing in schools, which has taught generations of young people to think and write. Will take-home assignments or exams still be possible if ChatGPT can mimic the style of the average student?
In connection with this, Stephen Marche wrote an essay in The Atlantic entitled “The College Essay Is Dead”. His predictions include the following:
It will take 10 years for academia to come to terms with this new reality: two years for students to discover the technology, three more years for professors to realize students are using it, and five years for university administrators to decide what, if anything, to do.
– Stephen Marche, December 6, 2022
It’s not lies, it’s fiction
Finally, ChatGPT also lets us dream of utopias of creativity. For one thing, natural language processing can shed light on a number of scholarly problems, most notably literary attribution and dating.
The parameters of large language models are far more sophisticated than the current systems used to determine which works Shakespeare wrote.
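As a hint of what the simplest attribution systems actually do, here is a hedged sketch of classical stylometry: comparing how often two texts use common function words, the statistical “fingerprint” on which traditional authorship studies rely. The word list and snippets are illustrative assumptions, not a real attribution method:

```python
# Bare-bones stylometry: authors leave a statistical "fingerprint" in how
# often they use common function words. Real attribution systems refine
# this idea; the word list and snippets here are illustrative only.
from collections import Counter
import math

FUNCTION_WORDS = ["the", "and", "of", "to", "in", "that", "it", "not"]

def profile(text):
    """Relative frequency of each function word in the text."""
    words = text.lower().split()
    counts = Counter(words)
    total = max(len(words), 1)
    return [counts[w] / total for w in FUNCTION_WORDS]

def cosine(a, b):
    """Cosine similarity of two stylistic profiles (1.0 = identical style)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

known = "to be or not to be that is the question"
disputed = "it is not in the stars to hold our destiny but in ourselves"
print(f"stylistic similarity: {cosine(profile(known), profile(disputed)):.3f}")
```

A large language model replaces this handful of hand-picked frequencies with billions of learned parameters, which is why it is expected to outperform such systems.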
Most interesting, however, are the bot’s lies, which can be read as inventiveness. An Argentine writer asked it to tell the story of how Ludwig van Beethoven invented the chamamé.
After a few prompts, the bot described in great detail an alleged trip the musician made to Buenos Aires in 1810, where he was celebrated as a hero in Latin America.
The impossible relationship between artificial intelligence and reality is the subject of linguistic debate, but ChatGPT’s endless possible answers suggest something more interesting: will chatbots be able to surpass the fictions of Jorge Luis Borges or the metafictions of Cervantes?