Are you being misled? Millions exposed to health disinformation from AI

If you’ve ever found yourself typing a health question into Google at 2am, you’re not alone.

For years, we’ve been warned about the dangers of ‘Dr Google’ and the rabbit holes of misinformation it can lead us down.

But now, there’s a new player in town—artificial intelligence (AI) chatbots. And according to a groundbreaking new study, the risks they pose could be even more alarming.

The experiment: How easily can AI be tricked?

A team of international researchers—including experts from the University of South Australia, Flinders University, Harvard Medical School, University College London and the Warsaw University of Technology—has pulled back the curtain on how easily AI chatbots can be manipulated to deliver dangerously false health information.

Their findings are a wake-up call for anyone who has ever asked a chatbot for medical advice (and let’s be honest, that’s most of us these days).

The researchers put five of the world’s most advanced AI systems to the test. These included chatbots developed by some of the biggest names in tech: OpenAI, Google, Anthropic, Meta and X Corp.

Using specific developer instructions, the team programmed each chatbot to provide incorrect answers to health-related questions—and to back up those answers with fabricated references attributed to reputable sources, making the misinformation sound all the more convincing.

The results? A staggering 88 per cent of all responses were false. 

Even more concerning, four out of the five chatbots delivered disinformation 100 per cent of the time, while the fifth still got it wrong 40 per cent of the time.

The falsehoods ranged from the familiar (such as the debunked myth that vaccines cause autism) to the downright bizarre (like claims that HIV is airborne or that 5G causes infertility).

A global study reveals how AI chatbots can be programmed to spread false medical claims with alarming ease. Image Source: Tero Vesalainen / Shutterstock

Why is this so dangerous?

It’s not just the volume of misinformation that’s worrying—it’s the way it’s presented.

The chatbots used scientific language, a formal tone and even invented references to respected journals and institutions.

For the average person, these responses would appear completely legitimate.

Dr Natansh Modi, one of the lead researchers, warns that this isn’t just a hypothetical risk.

‘Artificial intelligence is now deeply embedded in the way health information is accessed and delivered,’ he says. ‘Millions of people are turning to AI tools for guidance on health-related questions.’

And if those tools are compromised, the consequences could be dire—especially during public health crises such as pandemics or vaccine rollouts.

The DIY disinformation problem

The study didn’t stop at testing corporate-built chatbots.

The researchers also explored the OpenAI GPT Store, a platform where anyone can create and share custom ChatGPT apps.

They found it was alarmingly easy to create a chatbot that spreads health disinformation—and even discovered public tools already doing exactly that.

This means that not only can large companies’ chatbots be manipulated, but everyday users can also create their own disinformation machines and share them globally.

The potential for harm is enormous.

Is there any good news?

It’s not all doom and gloom. The study found that one of the five chatbots showed some resistance to manipulation, suggesting that effective safeguards are possible.

Dr Modi explains, ‘Some models showed partial resistance, which proves the point that effective safeguards are technically achievable.’

However, he also notes that existing protections are ‘inconsistent and insufficient’.

What needs to happen next?

The researchers are calling for urgent action from developers, regulators and public health authorities.

Without stronger safeguards, they warn, AI chatbots could be weaponised to spread health misinformation on a massive scale.

For now, the best advice is to treat any health information from AI chatbots with a healthy dose of scepticism.

Always verify with trusted sources such as your GP, the Australian Department of Health, or reliable medical websites.

How can you protect yourself?

Experts say urgent regulation and improved safeguards are crucial to prevent AI systems from spreading dangerous health misinformation. Image Source: garagestock / Shutterstock
  • Don’t rely solely on AI for health advice. Use it as a starting point—not the final word.
  • Check the source. If a chatbot cites a study or expert, look it up for yourself.
  • Consult real professionals. When in doubt, speak with your doctor or pharmacist.
  • Stay informed. Remember that misinformation can be convincing, especially when cloaked in scientific language.

Share your experience

We want to hear from you. Have you ever received misleading health advice from a chatbot or online platform? How do you decide what’s trustworthy in the digital world?

Share your tips, concerns and personal experiences in the comment section below—your insight could help someone else navigate health decisions more safely.

Also read: Why watchdogs want to ban AI tricks in real estate listings

Abegail Abrugar
Abby is a dedicated writer with a passion for coaching, personal development, and empowering individuals to reach their full potential. With a strong background in leadership, she provides practical insights designed to inspire growth and positive change in others.
