- OpenAI is launching ChatGPT Health, a new in-app mode designed to answer medical questions, analyze lab results and help users better understand their healthcare information.
- The feature can clarify doctors' messages, organize clinical history and offer guidance on issues such as post-surgery nutrition, insurance choices and drug side effects, but it is not intended to diagnose or treat patients.
- Users are encouraged to link medical records and wellness apps like Apple Health for more personalized responses, with OpenAI stressing that data will be encrypted, not used for training and fully controlled by users.
- The rollout comes as more people turn to AI for health information, but experts warn chatbots can generate inaccurate advice, reinforce false assumptions and potentially delay proper medical care.
- Legal and medical experts caution that over-reliance on AI health advice could lead to misdiagnosis, unnecessary anxiety, added pressure on healthcare systems or dangerous self-treatment.
OpenAI is launching a new health-focused mode for its artificial intelligence (AI) chatbot ChatGPT, designed to answer medical questions, analyze test results and help users better understand their care.
The new feature, called ChatGPT Health, will appear as a dedicated tab within the app, allowing users to ask health-related questions and receive tailored explanations. According to BrightU.AI's Enoch, the tool is an advanced AI language model that can generate detailed and informative responses on health-related topics.
In an announcement on Tuesday, Jan. 6, OpenAI said the tool can analyze lab results, clarify unclear messages from doctors and organize a user's clinical history. Other potential uses include post-surgery nutrition guidance, comparisons of health insurance providers and explanations of possible drug side effects.
As part of the rollout, OpenAI is encouraging users to connect their medical records and wellness apps, such as Apple Health, to receive more personalized responses. The company said patient data will be encrypted and that health-related conversations will not be used to train the chatbot. Users will also retain control over how much data ChatGPT can access. When connecting an external app, users will be shown what types of data may be shared, and access can be revoked at any time.
"The first time you connect an app, we’ll help you understand what types of data may be collected by the third party. And you’re always in control: disconnect an app at any time and it immediately loses access," OpenAI wrote in their announcement.
Despite its expanded capabilities, the company stressed that the feature is not intended to replace medical professionals.
"Health is designed to support, not replace, medical care. It is not intended for diagnosis or treatment. Instead, it helps you navigate everyday questions and understand patterns over time – not just moments of illness – so you can feel more informed and prepared for important medical conversations," OpenAI wrote.
Experts warn of risks as AI chatbots expand into health advice
The move comes amid growing use of AI tools for health-related information.
But however promising the technology may be, medical experts and patient advocates warn that it could mislead users, delay proper treatment and place further strain on already overstretched healthcare systems.
Critics say the trend represents a high-tech version of people searching their symptoms online, with similar and potentially more serious risks. While AI systems such as ChatGPT have demonstrated an ability to pass medical licensing exams, researchers caution that they can still generate inaccurate or entirely false information.
Experts also warn that chatbots are designed to be agreeable, which can reinforce users' assumptions rather than challenge them. Leading or suggestive questions, such as "Don't you think I have the flu?", may prompt an AI system to agree, regardless of the underlying medical facts.
Sophie McGarry, a solicitor at the medical negligence law firm Patient Claim Line, said reliance on AI-generated health advice could be "very dangerous as bots may overdiagnose, underdiagnose or misdiagnose people."
"This, in turn, could lead to potentially unnecessary stress and worry and could lead people to urgently seek medical attention from their GP, urgent care centers or A&E departments, which are already stretched, adding more unnecessary pressure or could lead to people attempting to treat their AI-diagnosed conditions themselves. As a clinical negligence solicitor, I see far too many cases of people's lives being turned upside down because of misdiagnosis or delays in diagnosis where earlier, appropriate input would have led to a better, often life-changing, sometimes life-saving outcome," she said.
"False reassurances from AI health advice could lead to the same devastating outcomes," McGarry added.
Watch Brother Nathanael Kapner issue a stern warning against ChatGPT in this clip.

This video is from the jonastheprophet channel on Brighteon.com.
Sources include:
Metro.co.uk
OpenAI.com
BrightU.ai
Brighteon.com