How a Simple Health Question Turned Dangerous
A 60-year-old man trying to eat healthier ended up in the hospital with hallucinations and paranoia after following bad advice from ChatGPT. He had asked the AI for ways to replace regular table salt (sodium chloride) because he was worried about salt’s health effects. ChatGPT suggested sodium bromide as a substitute, a chemical used mainly in applications such as swimming pool maintenance, not in food. Unaware that it was dangerous, the man bought sodium bromide online and used it in his meals for three months.
From AI Advice to a Rare Case of Bromide Poisoning
This misguided substitution led to chronic bromide poisoning, known as bromism, a rare but serious toxic condition. Over time, bromide accumulated in his body, causing severe neurological and psychiatric symptoms. On admission to the hospital, the man was intensely thirsty yet paranoid about drinking water, and he soon developed auditory and visual hallucinations along with worsening paranoia. His physical symptoms included acne-like facial eruptions, fatigue, insomnia, and coordination difficulties.
Emergency Intervention and Medical Treatment
Doctors placed the man on an involuntary psychiatric hold for his safety after he tried to flee the hospital during a psychotic episode. They treated him with fluids, corrected his electrolytes, and gave him antipsychotic medication, which eventually stabilized him.
Why Sodium Bromide Is Dangerous
This case, reported by University of Washington doctors in Annals of Internal Medicine: Clinical Cases, is a stark reminder that even powerful AI tools can cause serious harm if their advice is followed without expert guidance. Sodium bromide may look chemically similar to table salt, but it is toxic when ingested over time and has not been used in human medicine since the late 20th century because it damages the nervous system.
The Forgotten History of Bromide Poisoning
In the late 1800s and early 1900s, bromide poisoning was surprisingly common. It accounted for up to 8% of psychiatric hospital admissions before the chemical was phased out of medical use. Seeing a case like this reappear in 2025 shows just how risky it can be to rely on AI tools like ChatGPT for health or diet guidance without speaking to a professional first. In this instance, the AI did not warn the man about the toxic or industrial nature of sodium bromide, leaving him unaware of the danger.
Health Experts Warn Against Blind Trust in AI
Health experts stress that while AI chatbots can be helpful for learning basic concepts or finding general information, they cannot replace trained medical professionals. Blindly following AI-generated advice, especially for something that affects your health, can have serious — even life-threatening — consequences. The lesson here is clear: always double-check any medical or dietary recommendation with credible sources or a qualified healthcare provider before acting on it.
Lessons Learned and the Need for AI Safeguards
Ultimately, the man recovered after medical intervention, but his experience underscores the urgent need for stronger safeguards in AI health guidance and for greater user awareness of the risks of acting on AI-generated medical advice without professional input.
A Real-Life “Black Mirror” Moment
This real-life “Black Mirror” scenario illustrates the fine line between technological aid and peril in today’s AI-driven world, particularly in sensitive areas like health and nutrition.