ChatGPT “advises” man to poison himself with hallucination-causing diet and nearly puts him six feet under
By sdwells // 2025-08-15
 
Technology can be your best friend or your worst enemy, and artificial intelligence is no exception. Case in point: ChatGPT. A 60-year-old Washington man was hospitalized after unknowingly poisoning himself for months by following incorrect dietary advice from ChatGPT. His case, recently published in the Annals of Internal Medicine, illustrates how misinformation from artificial intelligence can lead to serious, preventable health crises.
  • A 60-year-old man in Washington developed severe bromide poisoning after ChatGPT incorrectly advised him to replace table salt with sodium bromide, leading to paranoia, hallucinations, and hospitalization.
  • The man, on a highly restrictive vegetarian diet with self-distilled water, followed the advice for three months, reaching dangerously high bromide levels (1,700 mg/L vs. normal 0.9–7.3 mg/L) and showing symptoms like confusion, memory loss, rashes, fatigue, and poor coordination.
  • Doctors confirmed the AI-generated advice was reproducible through the same search, warning that chatbots can spread inaccurate medical information lacking context or critical review.
  • After three weeks of treatment with fluids, electrolytes, and psychiatric care, his condition stabilized; the case highlights the need for caution and professional guidance when using AI for health decisions.

Man Accidentally Poisons Himself After Following ChatGPT Diet That Caused Hallucinations

The patient arrived at his local emergency room convinced his neighbor was poisoning him. Within 24 hours, his condition worsened: he developed paranoia and hallucinations and tried to escape the hospital. He was placed on an involuntary psychiatric hold for his own safety.

Upon questioning, the man explained that he had multiple dietary restrictions and followed an “extremely restrictive” vegetarian diet, even distilling his own water. After reading about the potential harms of sodium chloride (table salt), he turned to ChatGPT for advice on eliminating it from his diet. The chatbot reportedly told him it was safe to replace salt with sodium bromide, a chemical once used as a sedative in the early 20th century and still found in some anticonvulsants for humans and dogs.

Trusting the recommendation, he used sodium bromide for three months. Over time, the chemical built up in his system, causing bromism, or bromide poisoning, a condition that impairs nerve function and can trigger neurological and psychiatric symptoms. His bromide level reached a staggering 1,700 mg/L, compared with a normal range of 0.9–7.3 mg/L. Symptoms included confusion, memory loss, anxiety, delusions, skin rashes, acne, insomnia, fatigue, muscle coordination problems, and excessive thirst.

Physicians at the University of Washington in Seattle replicated his search and confirmed they received the same faulty advice from ChatGPT, underscoring the potential risks of AI-generated health recommendations. Historically, bromide was widely used in sedatives and over-the-counter medicines, but as the dangers of chronic exposure became clear, regulators phased it out of U.S. drug supplies by the late 20th century. Today, bromism is rare.

The man’s treatment involved high volumes of fluids and electrolytes to flush the bromide from his body. It took three weeks for his levels to normalize and for him to be weaned off psychiatric medications. Only then was he cleared for discharge.

The case highlights broader concerns about AI reliability in medical contexts. The doctors noted that AI tools like ChatGPT can generate “scientific inaccuracies, lack the ability to critically discuss results, and ultimately fuel the spread of misinformation.” They emphasized that it is “highly unlikely” any medical professional would recommend sodium bromide as a salt substitute.

OpenAI has acknowledged that its chatbots are not intended for diagnosing or treating medical conditions, and newer versions reportedly have improved health-question handling and “flagging” capabilities. Still, the patient appeared to have been using an older version of the software.

Physicians warn that as AI adoption grows, healthcare providers should ask patients where they obtain medical advice. This case serves as a stark reminder: while AI may bridge the gap between scientific information and the public, it can also disseminate decontextualized or dangerous recommendations. Users should always verify health information with qualified professionals before making dietary or medical changes.

Tune your internet dial to NaturalMedicine.news for more tips on how to use natural remedies for preventative medicine and for healing, instead of trusting AI to “compute” your solutions and drive you to an early grave.

Sources for this article include:

NaturalNews.com

DailyMail.co.uk