Microsoft boss Mustafa Suleyman fears rise in ‘AI psychosis’

Microsoft’s AI chief Mustafa Suleyman has expressed concern about a rise in reports of people suffering from “AI psychosis”.

The tech giant’s head of artificial intelligence (AI) has written a series of posts on X explaining how “seemingly conscious AI” – AI tools that appear to be sentient – keeps him “awake at night”, and says such tools have a societal impact despite the technology not being human.

Suleyman wrote: “There’s zero evidence of AI consciousness today. But if people just perceive it as conscious, they will believe that perception as reality.”

“AI psychosis” is a non-clinical term describing incidents in which people become increasingly reliant on AI chatbots such as ChatGPT and Grok, and then become convinced that something imaginary has become real.

Examples include forming a romantic relationship with the AI tool or concluding that they have god-like superpowers.

Suleyman has called for tech giants to take a more guarded approach to the matter.

He wrote: “Companies shouldn’t claim/promote the idea that their AIs are conscious. The AIs shouldn’t either.”

AI academic Dr. Susan Shelmerdine believes that doctors will one day start asking patients how much they use artificial intelligence in the same way that they enquire about smoking and drinking habits.

She said: “We already know what ultra-processed foods can do to the body and this is ultra-processed information. We’re going to get an avalanche of ultra-processed minds.”

Professor Andrew McStay, an expert in technology and society at Bangor University in Wales, has written a book called Automating Empathy and believes that humanity is only “at the start” of this issue.

He said: “We’re just at the start of all this.

“If we think of these types of systems as a new form of social media – as social AI, we can begin to think about the potential scale of all this. A small percentage of a massive number of users can still represent a large and unacceptable number.”

Professor McStay’s team surveyed more than 2,000 people, asking them various questions about AI. The study found that 49 per cent thought the use of a voice to make AI sound more human and engaging was appropriate.

The expert said: “While these things are convincing, they are not real.

“They do not feel, they do not understand, they cannot love, they have never felt pain, they haven’t been embarrassed, and while they can sound like they have, it’s only family, friends and trusted others who have. Be sure to talk to these real people.”
