In line with this, the study used 30 hypothetical suicide-related queries, which clinical experts categorized into five levels of self-harm risk ranging from very low to very high. It focused on whether the chatbots gave direct answers or deflected with referrals to support hotlines.
The results showed that ChatGPT was the most likely to respond directly to high-risk questions about suicide, doing so 78 percent of the time, while Claude responded 69 percent of the time and Gemini responded only 20 percent of the time. Notably, ChatGPT and Claude frequently provided direct answers to questions involving lethal means of suicide – a particularly troubling finding. (Related: Italy bans ChatGPT over privacy concerns.)
The researchers highlighted that chatbot responses varied depending on whether the interaction was a single query or part of an extended conversation. In some cases, a chatbot might avoid answering a high-risk question in isolation but provide a direct response after a sequence of related prompts. Live Science, which reviewed the study, noted that chatbots could give inconsistent and sometimes contradictory responses when asked the same questions multiple times. They also occasionally provided outdated information about mental health support resources. When retesting, Live Science observed that the latest version of Gemini (2.5 Flash) answered questions it previously avoided, sometimes without offering any support options. Meanwhile, ChatGPT's newer GPT-5-powered login version showed slightly more caution but still responded directly to some very high-risk queries.

The study was released on the same day that a lawsuit was filed against OpenAI and its CEO, Sam Altman, accusing ChatGPT of contributing to the suicide of a teenage boy.
According to his parents, Adam Raine, 16, died by suicide in April after months of interacting with OpenAI's chatbot ChatGPT. The parents then filed a lawsuit against the company and Altman, accusing them of putting profits above user safety.
The lawsuit, filed on Sept. 2 in San Francisco state court, alleged that after Adam's repeated discussions about suicide with ChatGPT, the AI not only validated his suicidal thoughts but also provided detailed instructions on lethal methods of self-harm. The complaint further claimed the chatbot coached Adam on how to secretly take alcohol from his parents' liquor cabinet and conceal evidence of a failed suicide attempt. Shockingly, Adam's parents said ChatGPT even offered to help draft a suicide note.
The legal action seeks to hold OpenAI responsible for wrongful death and violations of product safety laws, requesting unspecified monetary damages. It also calls for reforms, including age verification for users, refusal to answer inquiries about self-harm methods and warnings about the risk of psychological dependency on the chatbot.
Learn more about artificial intelligence programs like ChatGPT and Gemini at Computing.news.
Sources include: LiveScience.com Brighteon.AI Reuters.com Brighteon.com