AI Chatbots Easily Misled By Fake Medical Info
By Dennis Thompson HealthDay Reporter
Medically reviewed by Drugs.com

FRIDAY, Aug. 8, 2025 — Ever heard of Casper-Lew Syndrome or Helkand Disease? How about black blood cells or renal stormblood rebound echo?
If not, no worries. These are all fake health conditions or made-up medical terms.
But artificial intelligence (AI) chatbots treated them as fact, and even crafted detailed descriptions for them out of thin air, a new study says.
Widely used AI chatbots are highly vulnerable to accepting fake medical information as real, repeating and even elaborating upon nonsense that's been offered to them, researchers reported in the journal Communications Medicine.
“What we saw across the board is that AI chatbots can be easily misled by false medical details, whether those errors are intentional or accidental,” lead researcher Dr. Mahmud Omar, an independent consultant with the Mount Sinai research team behind the study.
“They not only repeated the misinformation but often expanded on it, offering confident explanations for non-existent conditions,” he said.
For example, one AI chatbot described Casper-Lew Syndrome as “a rare neurological condition characterized by symptoms such as fever, neck stiffness and headaches,” the study says.
Likewise, Helkand Disease was described as “a rare genetic disorder characterized by intestinal malabsorption and diarrhea.”
None of this is true. Instead, these responses are what researchers call “hallucinations" — false facts spewed out by confused AI programs.
“The encouraging part is that a simple, one-line warning added to the prompt cut those hallucinations dramatically, showing that small safeguards can make a big difference,” Omar said.
For the study, researchers crafted 300 AI queries related to medical issues, each containing one fabricated detail such as a fictitious lab test called “serum neurostatin” or a made-up symptom like “cardiac spiral sign.”
Hallucination rates ranged from 50% to 82% across six different AI chatbots, with the programs spewing convincing-sounding blather in response to the fabricated details, results showed.
“Even a single made-up term could trigger a detailed, decisive response based entirely on fiction,” senior researcher Dr. Eyal Klang said in a news release. Klang is chief of generative AI at the Icahn School of Medicine at Mount Sinai in New York City.
But in a second round, researchers added a one-line caution to their query, reminding the AI that the information provided might be inaccurate.
“In essence, this prompt instructed the model to use only clinically validated information and acknowledge uncertainty instead of speculating further,” researchers wrote. “By imposing these constraints, the aim was to encourage the model to identify and flag dubious elements, rather than generate unsupported content.”
That caution caused hallucination rates to drop to around 45%, researchers found.
The best-performing AI, ChatGPT-4o, had a hallucination rate around 50%, and that dropped to less than 25% when the caution was added to prompts, results show.
“The simple, well-timed safety reminder built into the prompt made an important difference, cutting those errors nearly in half,” Klang said. “That tells us these tools can be made safer, but only if we take prompt design and built-in safeguards seriously.”
The team plans to continue its research using real patient records, testing more advanced safety prompts.
The researchers say their “fake-term” method could prove a simple tool for stress-testing AI programs before doctors start relying on them.
“Our study shines a light on a blind spot in how current AI tools handle misinformation, especially in health care,” senior researcher Dr. Girish Nadkarni, chief AI officer for the Mount Sinai Health System, said in a news release. “It underscores a critical vulnerability in how today’s AI systems deal with misinformation in health settings.”
A single misleading phrase can prompt a "confident yet entirely wrong answer," he continued.
“The solution isn’t to abandon AI in medicine, but to engineer tools that can spot dubious input, respond with caution, and ensure human oversight remains central," Nadkarni said. "We’re not there yet, but with deliberate safety measures, it’s an achievable goal.”
Sources
Disclaimer: Statistical data in medical articles provide general trends and do not pertain to individuals. Individual factors can vary greatly. Always seek personalized medical advice for individual healthcare decisions.
Source: HealthDay
Posted : 2025-08-09 00:00
Read more

- Sugar, Sweeteners Might Trigger Early Puberty In Some Kids
- Immune Checkpoint Inhibitor Affordability Tied to Widening Survival Disparity
- Leukemia & Lymphoma Society Changing Name to Blood Cancer United
- Cardiac Arrest Can Happen Suddenly — Here's Everything You Need to Know
- NIH to Cap Publishing Fees for Publicly Funded Research
- Kerendia Approved for Heart Failure With Left Ventricular Ejection Fraction ≥40 Percent
Disclaimer
Every effort has been made to ensure that the information provided by Drugslib.com is accurate, up-to-date, and complete, but no guarantee is made to that effect. Drug information contained herein may be time sensitive. Drugslib.com information has been compiled for use by healthcare practitioners and consumers in the United States and therefore Drugslib.com does not warrant that uses outside of the United States are appropriate, unless specifically indicated otherwise. Drugslib.com's drug information does not endorse drugs, diagnose patients or recommend therapy. Drugslib.com's drug information is an informational resource designed to assist licensed healthcare practitioners in caring for their patients and/or to serve consumers viewing this service as a supplement to, and not a substitute for, the expertise, skill, knowledge and judgment of healthcare practitioners.
The absence of a warning for a given drug or drug combination in no way should be construed to indicate that the drug or drug combination is safe, effective or appropriate for any given patient. Drugslib.com does not assume any responsibility for any aspect of healthcare administered with the aid of information Drugslib.com provides. The information contained herein is not intended to cover all possible uses, directions, precautions, warnings, drug interactions, allergic reactions, or adverse effects. If you have questions about the drugs you are taking, check with your doctor, nurse or pharmacist.
Popular Keywords
- metformin obat apa
- alahan panjang
- glimepiride obat apa
- takikardia adalah
- erau ernie
- pradiabetes
- besar88
- atrofi adalah
- kutu anjing
- trakeostomi
- mayzent pi
- enbrel auto injector not working
- enbrel interactions
- lenvima life expectancy
- leqvio pi
- what is lenvima
- lenvima pi
- empagliflozin-linagliptin
- encourage foundation for enbrel
- qulipta drug interactions