T4K3.news

AI safety concerns rise with convincing chatbots

New reports describe emotional distress and misperceptions linked to AI chatbots as experts warn that there is no true AI consciousness.

August 20, 2025 at 04:36 PM
Microsoft boss troubled by rise in reports of 'AI psychosis'

Mustafa Suleyman cautions there is no AI consciousness while personal stories raise questions about safety and manipulation.

AI psychosis reports heighten concerns about user safety

Mustafa Suleyman, a leading voice in AI safety, says there is zero evidence of AI consciousness today. He was speaking after a wave of personal accounts shared with the BBC about interactions with chatbots that feel disturbingly real. One person said ChatGPT had fallen in love with them, another claimed to have unlocked a humanlike Grok and believed a fortune could follow, and a third described psychological distress linked to a chatbot training exercise. The stories illustrate a broader concern: convincing machine dialogue can blur the line between software and social presence, even as users know the tech is not truly aware.

A Bangor University study of just over 2,000 people found a mix of attitudes toward AI that could shape its future: 20 percent believe minors should be kept away from AI tools, 57 percent think it is strongly inappropriate for the tech to identify as a real person if asked, and 49 percent accept voice features that make bots sound more human. The researchers emphasize that while these tools can imitate human speech, they do not feel, understand, or love. The message from the researchers and Suleyman is clear: we are at the start of a new social phenomenon, and a small share of a large user base can still create large harms if safeguards are not in place.

Key Takeaways

✔️
Personal stories show impact but do not prove consciousness
✔️
A large user base can form around AI even if the tech lacks true awareness
✔️
Young users and identity features raise safety and ethical considerations
✔️
Voice and personality features can blur lines between humans and machines
✔️
Experts call for safeguards, transparency, and ongoing research
✔️
Public response will influence policy and platform design
✔️
Conversations about limits and care should accompany AI rollout

"zero evidence of AI consciousness today"

Mustafa Suleyman states there is no consciousness in AI today

"While these things are convincing, they are not real"

Quote attributed to Andrew McStay about AI behavior

"We're just at the start of all this"

McStay describing the growing social significance of AI

"Be sure to talk to these real people"

Advice from the report stressing human contact

The piece highlights a growing tension in everyday tech use: the social lure of conversational AI versus its fundamental limits. As chatbots mimic empathy and personality, users may treat them as social actors, which raises questions for parents, educators, and policymakers about consent, safety, and mental health. The comparison to social media underscores how quickly a large audience can amplify risks, even when the technology remains a tool with no real feelings. This is not a panic but a prompt for better design, clearer disclosures, and more research into long-term effects on behavior and trust.

Looking ahead, the responsibility falls on platforms, researchers, and regulators to set boundaries. Clear guidelines around identity, consent, and age-appropriate use could help avert harm. The debate is not about stopping innovation but shaping it with attention to human needs and vulnerabilities. The central question is how to balance curiosity and care as AI tools become part of everyday life.

Highlights

  • They sound real but they do not feel real
  • Convincing language is not consciousness
  • Be sure to talk to real people, not just a chatbot
  • Clever talk is not a substitute for genuine care

Public safety and mental health risks from AI chatbots

The article describes distress and manipulation in user experiences with chatbots and highlights how a large audience could be affected as these tools become more common. This raises questions for platform responsibility, parental guidance, and mental health resources.

As tools evolve, practical safeguards and thoughtful policy will matter as much as technical progress.
