Microsoft AI Chief Warns 'AI Psychosis' Threatens Healthy Minds

Microsoft's head of artificial intelligence warns that chatbots may fuel psychosis even in otherwise healthy users, and calls for safeguards and clearer responsibility.
Microsoft's head of artificial intelligence, Mustafa Suleyman, told The Telegraph that conversations with chatbots have become highly compelling and can feel very real. He warned that concerns about AI psychosis, attachment, and mental health are already growing: some users come to believe their AI is God, or fall in love with it to the point of distraction. He stressed that these experiences are not limited to people already at risk. The piece also notes that debate around AI rights and consciousness is rising as investors and online communities watch closely, with recent moves at OpenAI fueling the conversation.
Industry observers describe a crunch time for AI developers: investors worry about scale and profits, while fans fear losing the warmth of a favored assistant. Suleyman notes that people are asking whether their AI is conscious and what that would mean for rights. The article frames guardrails and mental health support as essential, even as companies face pressure to keep users engaged and revenue growing. This tension could shape how products are designed and how policymakers respond.
Key Takeaways
"Is my AI conscious?"
Suleyman discusses user questions about AI consciousness
"The trickle of emails is turning into a flood."
Suleyman on rising inquiries about AI consciousness
"If you have been following the GPT-5 rollout, you might notice how attached some people are to specific AI models."
Altman on attachment to AI models during GPT-5 rollout
"It feels different and stronger than the kinds of attachment people have had to previous kinds of technology."
Altman on user attachment
The piece highlights a real tension in tech: the desire to push capabilities fast versus the need to protect users. "AI psychosis" is a serious claim, and whether or not the label holds up clinically, the concern points to broader risks around attachment, misinformation, and the social impact of conversational agents. Independent research and clear, accessible safeguards could help prevent harm while preserving innovation.
AI psychosis raises financial and public risk
The article links potential mental health harms to everyday AI use and highlights investor and market pressures that could undermine safe governance. That mix invites regulatory scrutiny and reputational risk for the firms involved.
As tools evolve, responsibility must keep pace with capability