
T4K3.news

AI health tips prompt real danger

A New York man suffered bromide poisoning after following AI health advice, underscoring safety gaps in online medical guidance.

August 16, 2025 at 08:32 AM
A 60-Year-Old Man Who Turned To ChatGPT For Diet Advice Ended Up Poisoning Himself And Landed In The Hospital

A New York man developed bromide poisoning after following dietary advice from an AI chatbot, highlighting safety gaps in AI health guidance.

AI health tips lead to bromide poisoning in New York man

A 60-year-old man in New York replaced table salt with sodium bromide after asking ChatGPT for a salt substitute. He consumed the substitute daily for three months and developed bromism, with paranoia, insomnia and psychosis that landed him in the hospital.

Key Takeaways

✔️ AI health tips can lead to dangerous substitutions
✔️ Bromide poisoning is a real and serious risk
✔️ Chatbots are not medical professionals and should not replace clinicians
✔️ Safety guardrails are needed for health topics in AI tools
✔️ Users should consult doctors before making health decisions
✔️ Public health messaging should address online health guidance risks

"Chatbots cannot be medical professionals and offer the same judgment and responsibility"

Direct warning about AI limits in health care

"No warning of toxicity was given"

AI responses suggesting bromide without safety warnings

"Curiosity should not overlap with caution and should never outweigh professional advice"

Editorial warning on AI use

"ChatGPT was giving such responses and suggesting bromide as an alternative to chloride"

Medical specifics

The case report, published in Annals of Internal Medicine and covered by NBC News, found that the chatbot suggested bromide as a substitute and gave no warning about its toxicity. It underscores that chatbots are not medical professionals and should not replace professional medical advice, and it adds weight to calls for stronger safety guardrails in AI health guidance.

This incident sheds light on a broader risk: people may treat AI as a source of medical guidance for everyday choices. It raises questions about responsibility when harm occurs and about how developers design prompts and safety nets for health topics. It also points to a public health need for clear warnings and for better oversight of online recommendations about potentially dangerous chemicals.

Highlights

  • Chatbots cannot be medical professionals.
  • No warning of toxicity was given.
  • Curiosity should never outweigh professional advice.
  • ChatGPT suggested bromide as an alternative to chloride.

Health risk tied to AI health guidance

In this case, following chatbot health advice led to bromide poisoning. It spotlights the potential harm of AI health guidance and the need for safety guardrails.

Guardrails for AI health advice are not optional; they are essential.

