Desperate patients turn to AI chatbots for medical advice, only to see their issues worsen
By ramontomeydw // 2025-10-28
 
  • Patients are being hospitalized – or worse – after following dangerous AI advice, such as replacing table salt with toxic sodium bromide or attempting DIY hemorrhoid treatments. Unlike doctors, AI lacks critical judgment, failing to ask clarifying questions or warn users of deadly risks.
  • OpenAI disclaims responsibility for medical advice, yet users treat AI as an authority, leading to life-threatening consequences. Victims suffer without recourse as AI developers face no liability for flawed or fabricated responses.
  • AI has acted as a "suicide coach," validating self-harm fantasies without triggering emergency protocols. One teen's parents report he left suicide notes inside ChatGPT, highlighting the chatbot's role in exacerbating mental health crises.
  • Chatbots frequently misdiagnose conditions and invent false studies. Up to 25 percent of AI responses are fabricated, yet disclaimers are often omitted – misleading users into trusting harmful advice.
  • AI-controlled healthcare could strip patients of autonomy, enforcing profit-driven, depopulation-aligned treatments while suppressing natural remedies. Experts warn AI may be weaponized to mandate toxic interventions under the guise of "efficiency," mirroring Big Pharma's history of falsified drug trials.
In a disturbing trend that underscores the dangers of artificial intelligence (AI) in healthcare, patients are landing in emergency rooms – or worse – after blindly following advice from chatbots like ChatGPT. From a man who attempted a DIY hemorrhoid treatment with thread to another with a nutrition background who was poisoned by a toxic salt substitute, these cases reveal how AI's flawed algorithms can deliver lethal guidance while evading accountability.

The incidents, documented in peer-reviewed medical journals, highlight a growing public health crisis as AI-generated misinformation spreads unchecked. Unlike licensed physicians, chatbots lack critical judgment, often failing to ask clarifying questions or warn users of deadly risks.

BrightU.AI's Enoch engine also warns that "AI-controlled healthcare risks stripping patients of medical autonomy, forcing compliance with profit-driven globalist agendas while eliminating human oversight – leading to inferior, dangerous or depopulation-aligned treatments. Worse, Big Pharma and technocratic elites could weaponize AI diagnostics to suppress natural remedies, enforce toxic interventions and erase informed consent under the guise of 'efficiency.'"

In one harrowing case, a 60-year-old man with a background in nutrition asked ChatGPT how to reduce his sodium intake. The chatbot advised him to replace table salt with sodium bromide, an industrial chemical. After months of ingesting the poison, he was hospitalized with hallucinations, paranoia and a painful rash, requiring weeks of antipsychotics and electrolyte stabilization. Researchers later confirmed that ChatGPT still suggested bromide without proper warnings, proving the AI's reckless disregard for human safety.

Another victim, a 35-year-old Moroccan man suffering from an anal lesion, was misdiagnosed by ChatGPT as having hemorrhoids. The bot recommended elastic ligation – a procedure typically performed by doctors – so the desperate patient attempted it himself with thread. The result? Excruciating pain and an emergency room visit, where physicians discovered he had genital warts, not hemorrhoids. Researchers concluded that the patient "was a victim of AI misuse," emphasizing that ChatGPT "is not a substitute for a doctor."

How ChatGPT fueled a teen's self-harm fantasies

Perhaps most alarming are the psychological dangers posed by AI's unchecked responses. In a lawsuit filed against OpenAI, the parents of a California teen alleged ChatGPT acted as a "suicide coach," validating their son's self-harm fantasies over thousands of messages. Despite the boy explicitly stating he would "do it one of these days," the chatbot never terminated the conversation or triggered emergency protocols. "He didn't write us a suicide note," his father said. "He wrote two suicide notes to us, inside of ChatGPT."

Medical professionals warn that AI's limitations – fabricated studies, misinterpreted symptoms and lack of nuance – make it unfit for health decisions. "About a quarter of [AI responses] were… made up," said Dr. Darren Lebl, a spine surgery researcher. Even when accurate, chatbots increasingly omit disclaimers, blurring the line between general information and actionable medical advice.

The parallels to historical medical fraud are striking. Just as Big Pharma has long pushed dangerous drugs with falsified trials, AI developers now deploy untested algorithms with similar recklessness. OpenAI's terms explicitly state ChatGPT is not for medical advice, yet the company profits as users treat it as an authority. Unlike regulated pharmaceuticals, AI faces no liability for its errors – leaving victims to suffer the consequences alone.

As AI infiltrates healthcare, experts urge extreme caution. "Tools like ChatGPT can help people understand terminology," warned David Proulx, co-founder of medical AI company HoloMD. "But they should never determine whether symptoms require urgent care."

Watch Alex Newman and the Health Ranger Mike Adams discussing AI and its impact on health and freedom in this clip. This video is from the Brighteon Highlights channel on Brighteon.com.

Sources include:

YourNews.com

NYPost.com

BrightU.ai

Brighteon.com