AI & innovation 28.01.26

AI misinformation in health searches: What brands can do

AI is already giving health advice. Are you represented accurately?

We examine the rise of AI as a source of health information, the misinformation risks it introduces, and how healthcare brands can influence accurate representation in AI search to build trust, authority and credibility online.

AI search: The first stop for health questions?

“Every day, millions of people ask ChatGPT for support with their health.” That’s the line OpenAI used to launch its new feature, ChatGPT Health. It lets people connect their medical records to get personalised health responses and is now being rolled out to early users in the US.

A quick temperature check seems to support the claim that health information is one of the most popular uses for AI. 1 in 4 people would rather consult AI than a doctor, and 4 in 5 ChatGPT users say it gives effective medical advice, according to Tebra. Anecdotally, Reddit users describe AI health advice as ‘life-changing’ and even call it ‘a shockingly good doctor’, especially as a first port of call.

The above confirms what most of us already suspect: there has been a sudden and meaningful shift in how people seek reassurance and triage symptoms, often before they ever encounter a medical professional. Of course, whether AI should be used for health questions is a fair debate – but the reality is that it already is.

AI and the risk of health misinformation

AI is a new and significant part of our information ecosystem, with the potential to change how misinformation forms and spreads. But first, a quick explanation of what that means. Misinformation is the sharing of false information where no harm is intended; disinformation is the deliberate creation and spreading of false information (i.e., fake news). AI can be used for that too, but it’s a separate discussion.

While misinformation is perhaps as old as society itself, the most recent pandemic taught us new lessons in how it takes hold. During health crises like COVID-19, people naturally have many questions, which, when unanswered by credible sources, create information voids. These gaps are breeding grounds for falsehoods. And when those falsehoods resonate with people’s existing beliefs, they’re shared widely. This phenomenon was dubbed ‘the infodemic’ – the mass of inaccurate advice that circulated almost as fast as COVID itself.

Does AI spread health misinformation?

As soon as you type a query into ChatGPT, a disclaimer appears under the search bar: ‘ChatGPT can make mistakes. Check important info.’ Charities, advocacy groups and healthcare professionals are justifiably raising the alarm about the real-world impact these mistakes could have.

A recent Guardian investigation identified several cases where AI overviews made incomplete or inaccurate claims about health, including wrong dosing, inappropriate reassurance, and outdated guidance. One example was an AI summary suggesting that patients with pancreatic cancer avoid high-fat foods – the opposite of standard clinical advice. Following it could have serious consequences.

As AI becomes a growing destination for health queries, concern is mounting about the scale of potential harm. Although it doesn’t typically invent false information from scratch, it can accelerate misinformation by synthesising fragments of truth into plausible but misleading narratives, presenting them with confidence, and repeating them at scale.

And as AI increasingly trains on AI-generated outputs, these errors are recycled and reinforced. Falsehoods can then harden into apparent consensus, making misinformation more durable and harder to trace back to an original source.

The false comfort of opting out

Until regulation and technological solutions catch up with the realities and risks of AI-generated health information, pharmaceutical and healthcare brands face a difficult challenge: how to shape the messages patients and professionals receive in an environment where control is increasingly diffuse.

Against the backdrop of compliance obligations and ethical accountability, the path of opting out altogether is understandably appealing. Stay silent, block LLMs from crawling, and keep any content firmly gated. For many healthcare organisations, this can feel like the lowest-risk path.

But in AI search, choosing not to participate doesn’t remove your brand from the conversation – it simply removes your voice. AI systems will continue to synthesise information, drawing on secondary sources that may lack context, nuance, or authoritative correction. Rather than opting out, the solution is an active risk management strategy: using content to shape the narrative within AI overviews.

How healthcare brands can shape the narrative in AI search

Healthcare brands may not control how AI systems generate health answers, but they can influence them by focusing on visibility and accuracy. In a highly regulated sector, this visibility is less about traffic or promotion and more about trust, reputation, and safeguarding your brand.

Below, we outline three examples of strategic processes for increasing AI search visibility and protecting against misinformation. Each focuses on reducing ambiguity, which is a root cause of misrepresentation in AI search.

1. Identifying information gaps

Information gaps occur where clinicians, patients and caregivers are asking questions but authoritative answers don’t exist. This is particularly relevant in very niche areas of health and pharma, where what little information exists may be outdated or overly technical.

Finding these gaps is a combination of:

  • Deep knowledge within your specialism – particularly in understanding the learning journey and pain points of your audience

  • Research strategies – including social listening, visibility and keyword research, and auditing what information is out there (and what’s missing)

 

2. Creating structured, AI-readable information

There are a number of technical SEO techniques that help AI systems find and interpret your content accurately – and many of these are simply the same techniques that have always worked well for search engine optimisation.

These focus on ensuring AI can crawl, read, interpret and reference your content correctly, including: 

  • Crawlable content – ensuring your content isn’t blocked, gated, or hidden behind technical barriers. For example, most AI crawlers do not currently render JavaScript, whereas Google can and often will. For your content to be crawlable, it must be ungated and fully accessible in the raw HTML (see the robots.txt sketch after this list).
  • Optimised schema markup – structured information added behind the scenes of your website that helps AI and search engines understand what your content is about and how different pieces of information relate to each other (a JSON-LD sketch follows this list).
  • Avoiding technical SEO issues – such as 404 errors or long internal redirect chains.
  • Fast, reliable page loading – making technical improvements so pages load quickly and consistently.
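To make the crawlability point concrete, here is a minimal robots.txt sketch that explicitly allows some of the main AI crawlers to access public content. The user-agent tokens shown (GPTBot, ClaudeBot, PerplexityBot, Google-Extended) are published by their respective vendors at the time of writing, but the list changes – treat this as an illustration and check each vendor’s documentation before relying on it.

# robots.txt – illustrative sketch; verify user-agent tokens against vendor docs

User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /

# Google-Extended controls whether content can be used for Google's AI models
User-agent: Google-Extended
Allow: /

# Default rules for all other crawlers
User-agent: *
Allow: /
# Keep genuinely private areas blocked (placeholder path)
Disallow: /portal/

The reverse – a blanket Disallow for these user agents – is exactly the ‘opting out’ discussed above: it removes your voice, not the conversation.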
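And to illustrate the schema markup point, below is a minimal JSON-LD sketch for a hypothetical condition page, using schema.org’s MedicalWebPage and MedicalCondition types. Every name, URL and date here is an illustrative placeholder rather than a recommended implementation – the right types and properties depend on the page’s actual content.

<!-- Hypothetical example: all names, URLs and dates are placeholders -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "MedicalWebPage",
  "name": "Understanding Type 2 Diabetes",
  "url": "https://www.example-health.com/conditions/type-2-diabetes",
  "about": {
    "@type": "MedicalCondition",
    "name": "Type 2 diabetes",
    "alternateName": "Type 2 diabetes mellitus"
  },
  "lastReviewed": "2026-01-15",
  "publisher": {
    "@type": "Organization",
    "name": "Example Health"
  }
}
</script>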

 

3. Publishing accurate content to interrupt amplification

AI doesn’t invent misinformation from nothing – it amplifies what’s already present, often recombining partial truths, outdated guidance, or low-quality sources into convincing but incorrect answers. This underscores the importance of producing high-quality, clinically verified medical content.

The impact your content has on AI summaries will also be determined by its E-E-A-T signals (a markup sketch showing how some of these signals can be made machine-readable follows this list):

  • Experience – showing real-world knowledge (such as case studies, clinician insights)
  • Expertise – demonstrating subject-matter authority (e.g., authored by medical professionals)
  • Authoritativeness – being recognised and cited as a trusted source by others
  • Trustworthiness – ensuring accuracy, transparency, and compliance
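Some of these signals can also be expressed in structured data so that machines can read them directly. The sketch below uses schema.org’s author, reviewedBy, lastReviewed and citation properties; the people, credentials, dates and URL are hypothetical placeholders.

<!-- Hypothetical example: names, roles, dates and URLs are placeholders -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "MedicalWebPage",
  "name": "Understanding Type 2 Diabetes",
  "author": {
    "@type": "Person",
    "name": "Dr Jane Example",
    "jobTitle": "Consultant Endocrinologist"
  },
  "reviewedBy": {
    "@type": "Person",
    "name": "Dr John Placeholder",
    "jobTitle": "Clinical Reviewer"
  },
  "lastReviewed": "2026-01-15",
  "citation": "https://www.example-guidelines.org/type-2-diabetes"
}
</script>

Markup like this doesn’t substitute for genuinely expert, clinically reviewed content – it simply makes the review process legible to the systems assembling AI answers.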

 

With healthcare content, it is also essential to meet Google’s YMYL (Your Money or Your Life) content standards, as these will likely have a positive impact on your site appearing in AI search engines. Additionally, ChatGPT has been shown to draw on Google search results for its responses, so optimising according to Google’s guidelines will also influence how you appear in AI search engines.

Monitor your brand visibility and accuracy across AI search

The Varn Health AI Visibility Framework is a specialist assessment that helps you:

  • Understand how your brand and content currently perform across AI-powered search and discovery tools
  • Benchmark your AI visibility and accuracy against priority competitors in your market
  • Identify risks, gaps, and missed opportunities to ensure critical content reaches your target audiences
  • Implement a clear action plan that highlights the biggest impact opportunities to get visible in AI
Find out about the AI Visibility Framework
Article by Shaina, Lead Medical Writer

Do you need help future-proofing your healthcare marketing strategy?

Get in touch
