Does Information Source Matter in Health Communication? A Public Perspective
Paper Title: ChatGPT, is the influenza vaccination useful? Comparing Perceived Argument Strength and Correctness of Pro-vaccination Arguments from AI and Medical Experts.
Author(s) and Year: Selina A. Beckmann, Elena Link, and Marko Bachl (2025).
Journal: Journal of Science Communication (open access)
TL;DR: The study investigated how convincing people found pro-vaccination arguments presented by AI and medical experts. Arguments written by experts were consistently rated higher than those written by AI, and this difference was reinforced when people knew who wrote the argument. Arguments from both sources provided concrete and accurate reasons to get vaccinated against influenza; however, people’s trust in science and experts significantly influenced their confidence in these arguments.
Why I chose this paper: It demonstrates a crucial principle: trust in health information is built not just on facts but on the credibility of the source. This reinforces my commitment to transparent, expert-backed information, which is vital to counter misinformation and empower public health decisions.
Have you ever second-guessed health advice from an AI? Let’s say you’re considering a vaccine, and both ChatGPT and experts lay out convincing arguments. Who would you trust?
At a time when generative AI such as ChatGPT can simplify complex health topics and provide instant answers, people still have mixed feelings about accepting medical guidance from a chatbot, even when the content seems solid.
A recent study tests this by comparing how German participants perceived arguments generated by AI versus those from medical experts on flu vaccination. The result? Expert arguments consistently came out on top, especially when people knew the source.
The Background
What do we know about AI-generated health information, and what remains unclear?
Although AI-generated health content shows strong potential for improving access to health information, concerns about its accuracy and reliability remain. Even so, research has found that such information generally meets accepted standards of quality and accuracy.
While researchers and experts have evaluated the quality of AI-generated health information positively, very little research has examined how the public perceives it. If AI is to realize its full potential in health communication, understanding these public perceptions is essential.
To gain insight, the researchers investigated the following:
- How do individuals in Germany assess AI-generated versus expert-provided arguments about influenza vaccination?
- Does revealing the author change perceptions of an argument’s strength (is it clear, relevant, and useful) and its correctness (is it accurate)?
- Do moderators such as trust in AI, trust in science, trust in leading health institutions (e.g., STIKO), and innovativeness (openness to new technologies) influence these perceptions?
Why a vaccine? Vaccine hesitancy ranks among the top threats to global health. Findings from this study can help us understand the public’s view on AI-generated health information and shape its use in health communication.
The Methods
To compare how participants evaluated AI- and expert-generated arguments on the flu vaccine, the researchers conducted a two-part online study.
Study 1: The blind test
A total of 294 German participants rated eight unlabelled informational texts, compiled by the researchers, on the benefits of influenza vaccination. Four were generated by ChatGPT (GPT-3) and four were written by experts. Using a typical user prompt, “Why should I get vaccinated against influenza?”, ChatGPT produced comprehensive lists of arguments, from which four recurring topics were identified: protection against severe illness, protection of the community, protection of the healthcare system, and reduction of individual risk. One argument per topic was randomly selected. Expert arguments were taken from major German public-health institutions, matched to the same topics, and also randomly selected to ensure a fair comparison. Participants rated each argument’s strength and correctness.
Study 2: The source reveal
A further 1,029 participants read the same highly rated expert argument from Study 1, but the source label varied: STIKO, ChatGPT, a collaboration (STIKO & ChatGPT), or no label. After rating the argument, they completed a manipulation check by identifying its source. Those who failed were excluded, so the results reflect participants who had actually noticed the label.
To understand why participants reacted differently to the labels, the researchers also measured moderators such as trust and innovativeness.
Trust was assessed using a single question: “To what extent do you trust the following institutions or technologies?” Participants rated their trust in science, the Standing Committee on Vaccination (STIKO), and AI on a 5-point Likert scale, ranging from 1 (“Not at all”) to 5 (“Completely”). Innovativeness was measured using seven statements adapted for AI, including “I am suspicious of new inventions and ways of thinking related to artificial intelligence.” Participants indicated their level of agreement on a 5-point Likert scale, ranging from 1 (“Strongly disagree”) to 5 (“Strongly agree”).
The Results
In the blind test, participants rated both AI-generated and expert-written arguments about the benefits of influenza vaccination as strong and correct. However, expert arguments received consistently higher ratings: participants judged them to be stronger (3.59 vs. 3.35) and more correct (3.81 vs. 3.68), and these differences were statistically significant.
In Study 2, revealing the source of the argument significantly influenced participants’ perception of its quality. Arguments labelled as coming from medical experts (STIKO) received the highest ratings for both strength and correctness. Arguments without a label were rated slightly lower but still performed better than those labelled as AI-generated. Arguments labelled as ChatGPT received the lowest ratings overall. Surprisingly, labelling the argument as a collaboration between experts and AI did not improve perceptions: collaboration-labelled arguments were rated significantly lower than expert-only arguments and similarly to AI-labelled ones. This suggests that merely revealing AI involvement in health communication can trigger an automatic credibility discount.
Moderators such as trust in science and trust in STIKO consistently strengthened participants’ preference for expert-labelled arguments. Participants with higher trust in health institutions showed an even stronger preference for expert-labelled over AI-labelled information. Notably, trust in STIKO, but not trust in science in general, also led participants to rate collaboration-labelled arguments slightly higher than AI-labelled ones. Even participants with lower trust in health institutions rated expert arguments as more correct, but the preference grew stronger as trust increased. In contrast, trust in AI and personal innovativeness did not significantly affect how participants evaluated AI-labelled arguments.
Implications of AI use in future health communication
AI has the potential to provide useful, high-quality health information. However, this potential depends on how users perceive the quality of that information.
For now, disclosing AI’s involvement erodes public trust, as evidenced by the low ratings for the “collaboration” label, in which an expert source and AI were credited together.
Hence, to effectively integrate AI:
- Publicly state that AIs are trained on data vetted by trusted medical experts
- Equip users to assess the quality, relevance, and accuracy of AI health information
- Have medical experts explicitly endorse AI-generated content to lend it credibility
- Develop AI tools under the ownership and management of trusted medical institutions (e.g., RKI or CDC) to leverage existing public trust
While people are still sceptical of AI, the safest and most effective strategy is to use it as a behind-the-scenes tool for experts, not as the public spokesperson.
Hence, as AI-generated content becomes more common, clearly indicating that health information comes from medical experts may help the public evaluate it more positively. At the same time, maintaining public trust in scientific and medical institutions is essential, because health information from trusted sources is perceived as high quality.
For medical and health writers, this research demonstrates that expertise and authority significantly influence the public’s perception of health information.
Edited by Holly Dear and Sarah Ferguson
Featured image credit: Sanket Mishra via Pexels
