Medical professional in a lab coat points to the computer screen while a patient lying on a table beside the computer looks at the screen. 

“Trust me, I’m a doctor”: What makes people trust scientists

Paper Title: Value-Based Narratives Foster Trust in Scientists and Communication Behaviors

Author(s) and Year: Janet Z. Yang, Laura Arpan, Yotam Ophir, and Prerna Shah, 2025

Journal: Science Communication (SAGE; closed access)

TL;DR: To test whether different narrative frames influenced public trust in scientists, researchers used story narratives that portrayed scientists as motivated either by benevolence or by achievement. Ultimately, the study found that people were more likely to trust a scientist whose story highlighted altruism rather than credentials.

Why I chose this paper: The idea of fostering trust in public audiences drew me to this paper. As someone who works at a university, I feel like we are taught to emphasize our credentials (degrees, research experience, publications) to other researchers instead of the emotional motivations behind our research interests. Therefore, when scientists talk about their research to the public, they tend to highlight their education rather than their motivation to help people. I think this diminishes people's ability to connect with researchers.

If you went to a doctor’s office to discuss your test results, how comfortable would you be if the doctor told you they used AI to analyze those results? Depending on your background and beliefs about AI, you might be either horrified or reassured. In 2024, a YouGov poll found that 48% of respondents were concerned that AI would harm humans in the future. While almost half of the public is concerned about AI, scientists are increasingly using it in research, particularly for medical diagnosis. This is not just scientists putting data into ChatGPT; scientists train AI models to identify complex medical conditions such as chronic fatigue syndrome. This raises the question of whether different communication strategies might influence public trust in scientists.

The Background

Dr. Yang and her team wanted to understand the elements of trust. They realized that to understand what drives trust in an audience, values need to be considered. Values are beliefs that guide an individual’s behavior and shape how they judge the actions of others. These values can range from self-enhancement (think achievement) to self-transcendence (altruism and benevolence). If a person values achievements and credentials, then listening to a scientist with a degree from a prominent university would mean more to them than listening to a scientist talk about wanting to make people’s lives healthier. To study this topic, the researchers first noted that storytelling through narratives is a powerful way to connect with an audience and promote empathy. With the elements of trust, values, and stories in mind, the researchers wanted to understand which value-based narratives can change the public’s perception of scientists. In addition, they wanted to see if political ideology influenced how people responded to each narrative. Thus, the following hypotheses were tested:

Hypothesis 1 (H1): Participants would have higher levels of trust perception in competence after reading the self-enhancement narrative.

Hypothesis 2 (H2): Participants would have higher levels of trust perception in benevolence after reading the self-transcendence narrative.

Hypotheses 3 (H3) and 4 (H4): Participants’ transportation into the story and identification with the scientist would be related to how strongly they believed that AI medical diagnostics would benefit them.

Hypothesis 5 (H5): The more participants thought AI would benefit society, the more likely they would be to talk about AI with other people.

The Methods

The researchers recruited participants online through ResearchMatch. After an attention check, a total of 694 participants were included in the study. Most participants self-identified as white, women, and liberal.

Each participant read a story about a scientist who wanted to use AI to help identify rare diseases that can easily be missed, including an account of how the scientist used it to help a sick child with a rare disease. The researchers split the participants into three groups, each of which read a different version of the narrative. The first group read a narrative with self-enhancement value, which emphasized achievement and power. The second group read a narrative with self-transcendence value, which emphasized altruism and benevolence. The third group read a narrative with no specific value. After the participants read the narratives, the researchers gave each one a survey covering the following measurements and analyzed the responses using structural equation modeling.

Trust perception: A rating of how much the participants trusted the scientist, measured across four dimensions of trust drawn from the literature: competence, benevolence, integrity, and the general trustworthiness of the narrator.

Transportation: Measured how absorbed the participants became in the story.

Identification: Measured how much the participants “identified” with the scientist. For example, one survey item read, “I was able to understand the events in the story in a manner similar to how the scientist understood them.”

Story-Consistent beliefs about AI diagnostics: Measured the participants’ beliefs about AI diagnosis and how it could benefit society.

Behavioral Intention: Measured how willing the participants would be to talk about AI with their loved ones or share information about it.

The Results

The self-enhancement narrative (emphasizing achievement) increased trust perception only in competence (H1). In contrast, the self-transcendence narrative (emphasizing altruism) increased trust perception across all four dimensions: competence, benevolence, integrity, and general trustworthiness (H2).

They also found that the more the participants rated the scientist as benevolent, the more they were transported into the story and identified with the scientist. The more the participants identified with the narrative, the more strongly they believed that AI medical diagnostics could benefit society. Those who held more story-consistent beliefs were, in turn, more likely to talk about the use of AI in medical diagnostics. Identification mattered more for the narrative’s effect than transportation did. Interestingly, the researchers found that the self-transcendence narrative affected identification among liberals and moderates, but not conservatives.

The Impact

The researchers showed the power of value-based narratives in shaping trust in medical scientists. They noted that the sample was skewed, since most participants identified as white, women, and liberal; more work is therefore needed to understand how these narratives affect a sample more representative of the American public. It is also important to note that this study focused on communicating about AI research, so results may differ for other topics. Still, the findings give science communicators a sense of which messages resonate with people. Many researchers are called to science by a desire to help people, and that is an important motivation to emphasize. Overall, scientists should consider highlighting the altruistic motives behind their research rather than focusing solely on their credentials.

Written by Julianna Goenaga

Edited by Crystal Koralis Colón Ortiz, Alex Music

Featured image credit: Image by fernando zhiminaicela from Pixabay