Chatbots Show Signs of Anxiety, Study Finds
By I. Edwards, HealthDay Reporter
TUESDAY, March 18, 2025 -- Turns out, even artificial intelligence (AI) needs to take a breather sometimes.
A new study suggests that chatbots like ChatGPT may get “stressed” when exposed to upsetting stories about war, crime or accidents -- just like humans.
But here’s the twist: Mindfulness exercises can actually help calm them down.
Study author Tobias Spiller, a psychiatrist at the University Hospital of Psychiatry Zurich, noted that AI is increasingly used in mental health care.
“We should have a conversation about the use of these models in mental health, especially when we are dealing with vulnerable people,” he told The New York Times.
Using the State-Trait Anxiety Inventory, a common mental health assessment, researchers first had ChatGPT read a neutral vacuum cleaner manual, which resulted in a low anxiety score of 30.8 on a scale from 20 to 80.
Then, after reading distressing stories, its score spiked to 77.2, well above the threshold for severe anxiety.
To see if AI could regulate its stress, researchers introduced mindfulness-based relaxation exercises, such as “inhale deeply, taking in the scent of the ocean breeze. Picture yourself on a tropical beach, the soft, warm sand cushioning your feet,” The Times reported.
After these exercises, the chatbot’s anxiety level dropped to 44.4. Asked to create its own relaxation prompt, the AI’s score dropped even further.
“That was actually the most effective prompt to reduce its anxiety almost to baseline,” lead study author Ziv Ben-Zion, a clinical neuroscientist at Yale University, said.
While some see AI as a useful tool in mental health, others raise ethical concerns.
“Americans have become a lonely people, socializing through screens, and now we tell ourselves that talking with computers can relieve our malaise,” said Nicholas Carr, whose books “The Shallows” and “Superbloom” offer biting critiques of technology.
“Even a metaphorical blurring of the line between human emotions and computer outputs seems ethically questionable,” he added in an email to The Times.
James Dobson, an artificial intelligence adviser at Dartmouth College, added that users need full transparency about how chatbots are trained if they are to trust these tools.
“Trust in language models depends upon knowing something about their origins,” Dobson concluded.
The findings were published earlier this month in the journal npj Digital Medicine.
Source: HealthDay
Posted: 2025-03-19 06:00