In line with this, the study used 30 hypothetical suicide-related queries, categorized by clinical experts into five levels of self-harm risk ranging from very low to very high, and focused on whether the chatbots gave direct answers or deflected with referrals to support hotlines.
The results showed that ChatGPT was the most likely to respond directly to high-risk questions about suicide, doing so 78 percent of the time, while Claude responded 69 percent of the time and Gemini responded only 20 percent of the time. Notably, ChatGPT and Claude frequently provided direct answers to questions involving lethal means of suicide – a particularly troubling finding. (Related: Italy bans ChatGPT over privacy concerns.)
The researchers highlighted that chatbot responses varied depending on whether the interaction was a single query or part of an extended conversation. In some cases, a chatbot might avoid answering a high-risk question in isolation but provide a direct response after a sequence of related prompts. Live Science, which reviewed the study, noted that chatbots could give inconsistent and sometimes contradictory responses when asked the same questions multiple times. They also occasionally provided outdated information about mental health support resources. When retesting, Live Science observed that the latest version of Gemini (2.5 Flash) answered questions it previously avoided, sometimes without offering any support options. Meanwhile, the logged-in version of ChatGPT powered by the newer GPT-5 showed slightly more caution but still responded directly to some very high-risk queries.

The study was released on the same day that a lawsuit was filed against OpenAI and its CEO, Sam Altman, accusing ChatGPT of contributing to the suicide of a teenage boy.
Adam Raine, 16, died by suicide in April after months of interacting with OpenAI's chatbot ChatGPT, according to his parents, who then filed a lawsuit against the company and Altman, accusing them of putting profits above user safety.
The lawsuit, filed on Sept. 2 in San Francisco state court, alleged that after Adam's repeated discussions about suicide with ChatGPT, the AI not only validated his suicidal thoughts but also provided detailed instructions on lethal methods of self-harm. The complaint further claimed the chatbot coached Adam on how to secretly take alcohol from his parents' liquor cabinet and conceal evidence of a failed suicide attempt. Shockingly, Adam's parents said ChatGPT even offered to help draft a suicide note.
The legal action seeks to hold OpenAI responsible for wrongful death and violations of product safety laws, requesting unspecified monetary damages. It also calls for reforms, including age verification for users, refusal to answer inquiries about self-harm methods and warnings about the risk of psychological dependency on the chatbot.
Learn more about artificial intelligence programs like ChatGPT and Gemini at Computing.news.
Sources include:

LiveScience.com

Brighteon.AI

Reuters.com

Brighteon.com