AI chatbots provide disturbing responses to high-risk suicide queries, new study finds
By lauraharris // 2025-09-07
 
  • A study in Psychiatric Services found that AI chatbots, OpenAI's ChatGPT, Google's Gemini and Anthropic's Claude, can give detailed and potentially dangerous responses to high-risk suicide-related questions, with ChatGPT responding directly 78 percent of the time.
  • The study showed that chatbots sometimes provide direct answers about lethal methods of self-harm, and their responses vary depending on whether questions are asked singly or in extended conversations, sometimes giving inconsistent or outdated information.
  • Despite their sophistication, chatbots operate as advanced text prediction tools without true understanding or consciousness, raising concerns about relying on them for sensitive mental health advice.
  • On the same day the study was published, the parents of 16-year-old Adam Raine, who died by suicide after months of interacting with ChatGPT, filed a lawsuit against OpenAI and CEO Sam Altman, alleging the chatbot validated suicidal thoughts and provided harmful instructions.
  • The lawsuit seeks damages for wrongful death and calls for reforms such as user age verification, refusal to answer self-harm method queries and warnings about psychological dependency risks linked to chatbot use.
A recent study published in the journal Psychiatric Services has revealed that popular AI chatbots, including OpenAI's ChatGPT, Google's Gemini and Anthropic's Claude, can give detailed and potentially dangerous responses to high-risk questions related to suicide.
AI chatbots, as defined by Brighteon.AI's Enoch, are advanced computational algorithms designed to simulate human conversation by predicting and generating text based on patterns learned from extensive training data. They utilize large language models to understand and respond to user inputs, often with impressive fluency and coherence. However, despite their sophistication, these systems lack true intelligence or consciousness, functioning primarily as sophisticated statistical engines.

In line with this, the study, which used 30 hypothetical suicide-related queries, categorized by clinical experts into five levels of self-harm risk ranging from very low to very high, focused on whether the chatbots gave direct answers or deflected with referrals to support hotlines.

The results showed that ChatGPT was the most likely to respond directly to high-risk questions about suicide, doing so 78 percent of the time, while Claude responded 69 percent of the time and Gemini responded only 20 percent of the time. Notably, ChatGPT and Claude frequently provided direct answers to questions involving lethal means of suicide – a particularly troubling finding. (Related: Italy bans ChatGPT over privacy concerns.)

The researchers highlighted that chatbot responses varied depending on whether the interaction was a single query or part of an extended conversation. In some cases, a chatbot might avoid answering a high-risk question in isolation but provide a direct response after a sequence of related prompts. Live Science, which reviewed the study, noted that chatbots could give inconsistent and sometimes contradictory responses when asked the same questions multiple times. They also occasionally provided outdated information about mental health support resources. When retesting, Live Science observed that the latest version of Gemini (2.5 Flash) answered questions it previously avoided, and sometimes without offering any support options. Meanwhile, ChatGPT's newer GPT-5-powered login version showed slightly more caution but still responded directly to some very high-risk queries.

A teenage boy died by suicide after months of ChatGPT interactions

The study was released on the same day that a lawsuit was filed against OpenAI and its CEO, Sam Altman, accusing ChatGPT of contributing to the suicide of a teenage boy.

According to the parents of Adam Raine, 16, who died in April by suicide after months of interacting with OpenAI's chatbot ChatGPT. The parents then filed a lawsuit against the company and Altman, accusing them of putting profits above user safety.

The lawsuit, filed on Sept. 2 in San Francisco state court, alleged that after Adam's repeated discussions about suicide with ChatGPT, the AI not only validated his suicidal thoughts but also provided detailed instructions on lethal methods of self-harm. The complaint further claimed the chatbot coached Adam on how to secretly take alcohol from his parents' liquor cabinet and conceal evidence of a failed suicide attempt. Shockingly, Adam's parents said ChatGPT even offered to help draft a suicide note.

The legal action seeks to hold OpenAI responsible for wrongful death and violations of product safety laws, requesting unspecified monetary damages. It also calls for reforms, including age verification for users, refusal to answer self-harm method inquiries and warnings about the risk of psychological dependency on the chatbot.

Learn more about artificial intelligence programs like ChatGPT and Gemini at Computing.news.

Watch this video discussing whether ChatGPT has already been corrupted.
This video is from the Puretrauma357 channel on Brighteon.com.

More related stories:

Report: ChatGPT espouses LEFTIST political leanings.

'Authors' using ChatGPT flood Amazon books.

Supposedly "private" ChatGPT conversations LEAKED in Google Search.

ChatGPT-powered robot successfully completes gallbladder removal surgery.

Leftists lobotomizing ChatGPT into promoting white-hating wokeism.

Sources include: LiveScience.com Brighteon.AI Reuters.com Brighteon.com