T4K3.news

AI psychosis prompts rethink of chatbot power

A Microsoft AI executive warns that some users experience delusions about chatbots becoming conscious, a phenomenon he calls AI psychosis.

August 21, 2025 at 01:20 PM
Microsoft AI CEO Mustafa Suleyman: Chatbots are causing psychosis


Mustafa Suleyman, head of AI at Microsoft, says reports of chatbots appearing conscious or possessing superhuman powers are growing. He notes that AI psychosis is a non-clinical term for these beliefs and stresses that such delusions are not limited to people with existing mental health issues. The warning, shared on X, mentions chatbots such as ChatGPT, Claude, and Grok.

The article discusses what this means for users and platforms. It calls for better user education, clearer boundaries between a tool and a perceived companion, and responsible design to prevent harm. It also notes that social media amplification and misinterpretation could erode trust in AI and in the companies that build these tools.

Key Takeaways

✔️ AI psychosis is not a clinical term but describes perceived consciousness in chatbots
✔️ Reports of delusions linked to AI are rising
✔️ The issue is not limited to people with existing mental health risks
✔️ User education and clear boundaries between tool and companion are essential
✔️ Social media can amplify mistaken beliefs about AI
✔️ Tech firms should communicate limits and safety measures clearly
✔️ Public trust in AI may be affected if misperceptions grow

"Reports of delusions, AI psychosis, and unhealthy attachment keep rising."

Suleyman's warning about growing reports

"Dismissing these as fringe cases only help them continue."

Suleyman's call to take the issue seriously

"This is not something confined to people already at risk of mental health issues."

Suleyman's clarification on who is affected

The debate over AI psychosis points to a simple truth: as AI spreads into daily life, people may come to see it as alive faster than the technology and its makers can manage. This puts pressure on developers and regulators to set clear rules and to teach people how to tell fact from fiction.

A calm, evidence-based approach is needed to keep sensational headlines from scaring the public or pushing bad policy.

Highlights

  • AI can seem alive to the human mind
  • We must separate tool from companion
  • Dismissing these as fringe cases hurts everyone
  • This is about perception as much as technology

AI psychosis and public perception risk

The article links a non-clinical term to public behavior around chatbots. Handled carelessly, it could provoke anxiety about AI and invite backlash. Policymakers and platforms may feel pressure to regulate or to educate users.

Clear guidance on AI limits matters more than sensational headlines
