Fear of AI
Managers should take fear of AI seriously. Here's how to reduce it.

AI programs like ChatGPT trigger fears in many people. An AI expert and psychologist explains how managers should deal with these fears, and how they can avoid unintentionally fueling them.

Artificial intelligence opens up numerous opportunities for companies to redesign work and processes. But the new technology is not met with enthusiasm alone. It can also make employees afraid: afraid of being left behind technically and not being able to cope with the tools, for example, or afraid that their own job will soon be taken over by AI.

Fear of AI: An expert explains

It is normal for some employees to be afraid of AI or to be skeptical about the use of AI tools. Managers must recognize these fears and take them seriously. In an interview, psychologist Katharina Weitz explains how these fears arise and how managers can best deal with them.

Ms. Weitz, you are a psychologist and computer scientist – how do you explain that many people are afraid of artificial intelligence and programs like ChatGPT?
Katharina Weitz: ChatGPT marks a major turning point. The program shows that artificial intelligence is no longer a niche technology that only universities or tech companies deal with. It is not specialized software developed for a company to automate production processes, for example. It is something that will penetrate our everyday lives and influence the lives of many people. ChatGPT is incredibly easy to use: anyone can use it, young and old alike. That makes it a topic of conversation at the breakfast table. And it makes it clear that ChatGPT will definitely change society.

So the fact that ChatGPT is so simple fuels fear of AI?
In a way, yes. It’s important to know that a new technology is always accompanied by skepticism. And that is basically healthy, because skepticism means that people don’t simply accept everything unquestioningly but think about the consequences. In connection with artificial intelligence, this skepticism is often accompanied by the fear of losing control.

So, to put it bluntly, the fear that intelligent machines could take over the world?
Exactly. Some people feel that our world is becoming a little like that of science fiction films, in which autonomous, self-thinking machines develop their own values and needs that do not correspond to ours. In a way, this fear is fueled by programs like ChatGPT: everyone can see that AI can now generate elaborate-sounding answers – something that seemed impossible to most people just a few months ago. That can make you afraid of what else AI might be able to do and where it will lead. But the cause is not AI itself; it’s that people don’t engage with the technology closely enough.

What do you mean?
Many people think that an AI program like ChatGPT learns independently – that it develops on its own, can acquire knowledge, and can think creatively. But that’s not the case: AI has to be trained, and it can only draw on the pool of knowledge it has been fed. Anyone who doesn’t know this, because they haven’t engaged with the topic, greatly overestimates AI – and can then quickly become afraid of the influence it will have on their own life, especially in a professional context. They ask themselves questions like: Will AI make my job redundant? Am I competent enough to keep my job when AI arrives? Will I still be able to keep up?

How can I, as a manager, tell that team members are afraid of artificial intelligence?
By the way they deal with the topic. It is important to be able to distinguish healthy skepticism from genuine fear.

How do people react when they are not just skeptical but really afraid?
Psychology describes a pattern of how people react to stress, for example when they are afraid: fight, flight or freeze. In other words, people fight, flee or play dead. People who are afraid of AI can show the same reactions. Those who fight often proactively reject the technology. They ask questions like: Why do we need this – isn’t everything working fine as it is? Or they refuse to use the programs, sometimes even unplugging machines that work with AI.

Those who flee or play dead, on the other hand, withdraw. They try to avoid contact with AI – and if contact cannot be avoided, they may appear desperate.

Is there another sign that shows that someone is afraid of AI?
If a person becomes very emotional, it is often a sign of fear. I often notice this myself in conversations after a lecture. For example, a woman came up to me and asked me about the open letter in which celebrities such as Elon Musk had called for a six-month pause in the development of artificial intelligence. She said: “So many famous people have signed this letter – does that mean that AI is dangerous and can take over everything?” This person was clearly afraid.

An extreme rejection of the technology, too, usually indicates fear of AI.

In contrast to people who are just skeptical?
Exactly. Skeptical people are more likely to engage in dialogue. If you, as a boss, were to ask people what they want and what they don’t want, where AI can help them and where it definitely can’t – then skeptical people would be able to formulate that. They would express concerns, perhaps many, but at the same time show a certain pragmatism: They would consider what the opportunities are, what the risks are. And they would be able to deal with the topic in depth. People who act out of fear that AI could overwhelm them or take over their work are hardly able to do something like that.

How should managers react if they notice a fear of AI in their team?
They should definitely address the fears. The motto in management work is: disruptions take priority – and fears are one such disruption. If you ignore them, they won’t go away. Instead, they will keep simmering and, at some point, erupt in an unpleasant way.

How, for example?
For example, when it becomes clear that some people are simply not using the new AI systems. I remember a story a manager recently told me. In a printing company, an AI was supposed to ensure that the printer produced the best possible print image – the employees only had to check whether this had worked. But because one employee did not trust the AI, he repeatedly intervened in the process beforehand. As a result, calibrating the machine took forever, and the print was not perfect either.

This kind of thinking – “I can do this better myself, because if the AI can do it, I’ll soon lose my job” – is quite common. If managers don’t address it, it not only puts a strain on employees; in the long term, it can also disrupt operations and damage the company.

As a manager, how do I best address fears, for example when talking about ChatGPT?
By seeking open communication with the whole team – in other words, by proactively creating an opportunity to talk about fears. As a boss, you could say: “Hey, ChatGPT is on everyone’s lips at the moment, and I’ve been thinking about it too. Let’s talk about it – I’m interested in what you think.” Or you can have employees who already have some expertise give a presentation to the whole team, followed by a question and answer session.

The lower the threshold for such an exchange, the greater the likelihood that people will voice their concerns about the issue – and also their fears, such as the fear of losing their job.

But that can only be the first step, right?
Sure. Apart from such an initial exchange, you should get people to engage with AI in a practical way. Because what we don’t know often scares us the most. Conversely, fear often subsides when we get closer to what scares us. That’s why I always recommend a workshop as a second step, where everyone can try out ChatGPT, for example – and where we can think together about whether and how AI could be used in the company.

Free online courses are also a good idea. Some employees prefer to learn in a more private, protected environment. It is important to make enough working time available for online courses too. People need enough space to get to grips with the topic properly. Otherwise, they are very unlikely to do it. After all, nobody completes a first aid course in their free time just because the company needs first aiders.

Why is it so important to master the technology – isn’t fear a feeling?
Of course. But the feeling becomes particularly strong when it is combined with the impression of having no control over the technology. We regain that control when we understand, for example, what AI can and cannot do. If you realize that ChatGPT cannot think strategically and has no company-specific knowledge at its disposal – unless it has been trained with it – you quickly move from the fear of losing your job to a realistic assessment of the situation. Something like this: “OK, ChatGPT may soon be able to take over aspect X and aspect Y of my job. But certainly not aspect Z! How can I continue my training so that I can make an even more meaningful contribution in area Z?”

What do I do as a manager if I notice that someone is still afraid of AI?
Then it’s time for a one-on-one conversation. If, for example, a person refuses to use AI, you should clearly communicate your expectations in this conversation and make it clear that there will be consequences if they are not met. But that no longer has anything to do with AI itself; it’s a management issue. Because at the end of the day, AI is just a new technology, like new accounting software. And if you’ve decided that it will be used, then every team member is expected to use it.

How can I avoid unintentionally fueling or increasing fears myself?
By not simply putting the technology in front of your team – for example, because you yourself are so enthusiastic about it and have identified ten possible uses for ChatGPT in five minutes. A “we’re all going to do it now because it’s so cool” approach quickly overwhelms employees, especially those who are skeptical or even afraid of AI.

Let’s say that as a boss I find AI and ChatGPT totally scary. I shouldn’t say that either, right?
You shouldn’t express it that extremely, no. But you can certainly voice concerns and skepticism objectively – for many employees, it is probably even a relief to realize: hey, the boss feels the same way I do. You just have to make it clear that you are nonetheless engaging with the topic of AI. As a boss, you are the role model: if you are a bit skeptical about the technology but try it out anyway, this openness will rub off on your employees. Some may think: “If the boss uses ChatGPT despite her doubts, then I might as well give it a try.”

So would it be better to be a little skeptical than overly enthusiastic?
I wouldn’t put it that way. The main thing is to get your team members on board. With accounting software, for example, you don’t say: “Personally, I think this program is great, you’ll be using it from Monday – now go ahead.” Instead, you ask what your employees would need from such software, where exactly it should be used, and what it should be able to do – before you buy it. And once you’ve made your decision, you train your team members and oversee the introduction of the software over a few months.

It is no different with AI: Here too, you should ask your employees what they think, where they see potential for using ChatGPT, for example, and where they see difficulties. The latter is particularly important for very enthusiastic managers: in their euphoria, they tend to overlook potential problems.

Do you have one last tip to get the team involved?
Try to bring a little lightness to the topic and make it entertaining. That makes it easier to ground everyone a little and to make clear: AI programs like ChatGPT are a tool – not a system that threatens anyone’s livelihood. And besides, if you’re laughing, you can’t be afraid at the same time.

How do you make the topic light-hearted?
I always start my ChatGPT presentations by asking the AI to write a poem about a sandwich – in the style of Goethe. This usually goes down very well, because the result is, of course, funny.

The expert

Katharina Weitz

Katharina Weitz is an educator, psychologist, and computer scientist. Since 2018, she has been researching at the University of Augsburg how to make artificial intelligence understandable to people, and she shares her knowledge of computer science and AI in workshops, lectures, videos, and books.
