
Using infographics to improve trust in science: a randomized pilot test

Abstract

Objective

This study describes the iterative process of selecting an infographic for use in a large, randomized trial related to trust in science, COVID-19 misinformation, and behavioral intentions for non-pharmaceutical preventive behaviors. Five separate concepts were developed based on underlying subcomponents of ‘trust in science and scientists’ and were turned into infographics by media experts and digital artists. Study participants (n = 100) were recruited from Amazon’s Mechanical Turk and randomized to five different arms. Each arm viewed a different infographic and provided both quantitative (narrative believability scale and trust in science and scientists inventory) and qualitative data to assist the research team in identifying the infographic most likely to be successful in a larger study.

Results

Data indicated that all infographics were perceived to be believable, with means ranging from 5.27 to 5.97 on a scale from one to seven. No iatrogenic outcomes were observed for within-group changes in trust in science. Given equivocal believability outcomes, and after examining confidence intervals for data on trust in science and then the qualitative responses, we selected infographic 3, which addressed issues of credibility and consensus by illustrating changing narratives on butter and margarine, as the best candidate for use in the full study.

Introduction

Misinformation about coronavirus disease 2019 (COVID-19) has spread widely, pervasively, and rapidly following the emergence of the disease [1,2,3]. The nature of this misinformation has ranged from clearly conspiratorial and misinformed, such as the idea that 5G cell towers spread COVID-19, to conceptually possible but implausible narratives about the origins of the disease and motivations underlying preventive public health efforts [4]. These narratives can spread very quickly [5] and have been associated, directly and indirectly, with harmful outcomes [6,7,8] as well as reduced personal wellness [9].

Prevention of COVID-19 misinformation uptake, as well as public health misinformation in general, is an important, but complex, area of research. For example, efforts to “fact check” or restrict access to misinformed narratives risk being counterproductive [10]. In addition, ethical concerns can reasonably be raised regarding attempts to restrict access to public speech. An alternative approach, often described as inoculation theory [11], focuses on interventions occurring prior to exposure to new misinformation. Such approaches have been used, for example, in addressing anti-vaccination narratives [12].

Based on recent studies [13, 14], our research team is currently investigating the potential for an intervention designed to improve public trust in science and scientists to serve as a possible approach for easily disseminated misinformation prophylaxis [15]. Specifically, we have proposed a randomized, controlled superiority trial comparing an infographic about the scientific process to a placebo infographic in terms of trust in science and scientists, reported believability of misinformed narratives about COVID-19, and behavioral intentions to engage in Centers for Disease Control and Prevention (CDC)-recommended prevention behaviors [15]. Part of the study protocol involves iterative design and selection of a single infographic from among multiple alternatives to be used in the primary trial. This Research Note describes the preliminary work and pilot test.

Main text

Infographic design

The infographics used in this pilot study were first conceptualized as text-only messaging based on our interpretation of underlying principles of trust in science as described by Nadelson et al. [16]. These included: (a) credibility and consensus, (b) epistemology, (c) trustworthiness, (d) stereotypes of scientists/“scientist-as-person,” and (e) science as methodology, not field. These ideas were workshopped extensively among the study team for clarity, and written descriptions of potential visual components were also recorded alongside each narrative.

As indicated in the protocol [15], these narratives were informally discussed within the authors’ nonscientific social networks. This feedback was discussed among the researchers and was used to make decisions about both the written and visual elements of the infographics. For example, non-scientists uniformly rejected statements beginning with “All scientists…,” preferring instead the more guarded “Most scientists…” They also encouraged linking visuals to commonly discussed science, like the SpaceX program. We also concluded that we should avoid politically controversial topics, such as climate change, in designing our infographics.

Our written descriptions were then presented in a meeting with a subcontracted graphics design team at Indiana University. That team prepared a set of five infographics, and our research team reviewed the images, collectively made suggestions, and then the graphics design team modified the infographics accordingly (see Additional files 3, 4, 5, and 6). Though each infographic had a core theme, there was considerable conceptual overlap among them given the complexity of trust.

  • Infographic 1: evolution in cigarette smoking recommendations (trustworthiness).

  • Infographic 2: SpaceX engineer putting on pants in the morning (scientist-as-person).

  • Infographic 3: changing recommendations about butter/margarine (credibility/consensus).

  • Infographic 4: John Snow and cholera (science as methodology).

  • Infographic 5: relying on a weather forecast (epistemology).

Pilot test methods

Data collection

The procedure for the pilot test was outlined in the published study protocol [15].

Data were obtained on December 19, 2020 from a sample of 100 US-based Amazon Mechanical Turk (mTurk) users ages 18 and older (individuals must be age 18+ to enroll as a mTurk worker). To ensure data quality, minimum qualifications were specified to initiate the survey (task approval rating > 99%, successful completion of more than 100, but fewer than 10,000 tasks, US-based IP address). Checks were embedded in the first part of the survey to control for dishonest workers, survey response bots or virtual private network users, and inattentive participants [17, 18]. Failing at these checkpoints resulted in the termination of the task and exclusion from the study, and participants were warned of this possibility on the study information page. Participants who successfully completed the study were compensated $0.61 USD. In the process of collecting 100 responses from workers, one additional worker refused consent, and 48 additional workers began the survey but were excluded prior to randomization for failing a quality check.
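To illustrate the kind of sequential gating described above, consider the following minimal, hypothetical sketch. The field names, thresholds, and check items are assumptions for illustration only, not the study’s actual screening instrument.

```python
# Hypothetical sketch of a sequential quality gate like the one described above.
# Field names, thresholds, and check items are illustrative assumptions.

def passes_screening(worker: dict) -> bool:
    """Terminate the task unless the worker clears every checkpoint."""
    checkpoints = [
        worker.get("approval_rate", 0.0) > 0.99,          # task approval rating > 99%
        100 < worker.get("tasks_completed", 0) < 10_000,  # task-count window
        worker.get("ip_country") == "US",                 # US-based IP address
        not worker.get("vpn_or_bot_flagged", False),      # VPN/bot heuristics
        worker.get("attention_check_passed", False),      # embedded attention item
    ]
    return all(checkpoints)
```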

Procedures and instrument

Eligible workers completed the trust in science inventory, which consists of 21 Likert-type items yielding a mean score from 1 (low trust) to 5 (high trust) [16], and then were randomized with equal allocation to view one of the five infographics (n = 20 per arm, though due to simultaneous survey participation, infographic 4 had 21 workers and infographic 5 had 19). Participants were required to pause for at least one minute while viewing the infographic to loosely replicate uptake from multiple, but much shorter, exposures that would occur through social media. After viewing the infographic, workers were asked a qualitative question about the infographic’s meaning [19] and then were asked to complete a modified version of the narrative believability scale (nbs-12), which consists of 12 Likert-type items that produce a mean score from 1 (low believability) to 7 (high believability) [20]. Finally, workers completed the trust in science inventory a second time.
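To make this flow concrete, here is a minimal, hypothetical sketch of equal-probability arm assignment and mean-scoring of Likert-type items. It is illustrative only; the function and column names are assumptions, not the study’s survey code.

```python
import random
import pandas as pd

ARMS = [1, 2, 3, 4, 5]

def assign_arm() -> int:
    """Assign an eligible worker to one of the five arms with equal probability.

    Per-arrival randomization like this can yield slightly uneven arm sizes
    (e.g., the 21/19 split noted above) when workers participate simultaneously.
    """
    return random.choice(ARMS)

def mean_scale_score(responses: pd.DataFrame, item_cols: list[str]) -> pd.Series:
    """Mean of Likert-type items per respondent, e.g., the 21 trust items (1-5)
    or the 12 nbs-12 items (1-7)."""
    return responses[item_cols].mean(axis=1)
```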

Analysis plan

Mean changes in trust in science between pretest and posttest were analyzed separately for each infographic using paired sample t-tests with unadjusted alpha set at .05. Differences in narrative believability between the five infographics were assessed using a one-way between-subjects analysis of variance (ANOVA), with Tukey’s HSD selected as a post-hoc test if the main effect was significant. All analyses were completed in SPSS v27.
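The analyses were run in SPSS v27 (the syntax is released as Additional file 2). Purely as an illustration of the same plan, a minimal Python sketch might look as follows; the long-format layout and column names (infographic, trust_pre, trust_post, nbs12) are assumptions for this example, not the structure of the released dataset.

```python
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

df = pd.read_csv("pilot_data.csv")  # hypothetical export; the released data are SPSS (.sav)

# Paired-sample t-test on trust in science, run separately within each arm
for arm, grp in df.groupby("infographic"):
    result = stats.ttest_rel(grp["trust_post"], grp["trust_pre"])
    print(f"Infographic {arm}: t = {result.statistic:.2f}, p = {result.pvalue:.3f}")

# One-way between-subjects ANOVA on narrative believability across the five arms
groups = [grp["nbs12"] for _, grp in df.groupby("infographic")]
f_stat, p_value = stats.f_oneway(*groups)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.3f}")

# Tukey's HSD post-hoc comparisons, only if the ANOVA main effect is significant
if p_value < 0.05:
    print(pairwise_tukeyhsd(df["nbs12"], df["infographic"]))
```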

Qualitative data were interpreted using a general inductive approach [21], with a primary focus on whether the participant described the infographic in such a way that it was clear that they understood the intended meaning.

Pilot test results

Trust in science

The trust in science inventory was reliable at pretest (α = 0.940) and posttest (α = 0.946) for the full sample. The mean level of trust at pretest was 3.79 (SD = 0.67), and at posttest was 3.86 (SD = 0.66).
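As background for the reliability values reported here, Cronbach’s alpha can be computed directly from item-level data. The following is a generic sketch, not the study’s SPSS procedure, and the example column names are hypothetical.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of scale items (rows = respondents, cols = items):
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# e.g., alpha for the 21 pretest trust items (hypothetical column names):
# cronbach_alpha(df[[f"trust_pre_{i}" for i in range(1, 22)]])
```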

All study arms reported higher trust in science at posttest than at pretest (mean differences ranging from 0.03 to 0.12; see Table 1), but only one within-arm difference was statistically significant (infographic 4, t(20) = −2.11, p = 0.048, 95% CI of difference: 0.001 to 0.239).

Table 1 Pretest-posttest comparison of trust in science scores

Narrative believability

The nbs-12 instrument was reliable for the full sample (α = 0.916). Each of the infographics had reasonably high believability, ranging from a low of 5.27 for infographic 1 to a high of 5.97 for infographic 2 (see Table 2). A one-way between-subjects ANOVA did not indicate any significant differences in mean narrative believability by infographic (F(4, 95) = 1.71, p = 0.154).

Table 2 Narrative believability scores

Qualitative results

The infographics were designed to convey specific meanings related to subconstructs of trust in science. Thus, responses where participants described the infographic in a way that reflected the message that we intended to communicate were marked as being ‘consistent,’ and those that shared other messages were marked as ‘inconsistent.’ In total, 25 of the 100 responses were determined to be inconsistent, split among: infographic 1 (n = 10), infographic 2 (n = 5), infographic 3 (n = 3), infographic 4 (n = 6), and infographic 5 (n = 1).

Exemplars of consistent and inconsistent responses, by infographic, are available in Table 3. In some cases, participants focused on the image itself rather than the meaning. For example, the intended emphasis of infographic 1 was on how doctors’ recommendations about cigarette smoking evolved due to new scientific evidence, but multiple people conflated scientists and doctors and focused on the medical recommendation rather than the reason for it. In other cases, it appeared that the narrative message was unclear, and some workers were not sure how to interpret an infographic: “I’m honestly not really sure what it was trying to communicate…” Finally, in a few cases, participants addressed the infographic’s meaning but derived context or content outside of what we intended to communicate. This was observed most notably for infographic 4, which depicted John Snow and cholera: “The scientist came up with a theory and used a silly observation of what was seen to prove a theory, to which the scientific community agreed because of the scientist’s identity. Basically, it was an appeal to authority.” While the respondent identified the actors as scientists, they interpreted the message as implying that scientists must be trusted because they are scientists, and not because they have provided evidence to support their claim (which was the opposite of the intended message).

Table 3 Exemplars of qualitative data

The other 75 responses reflected at least partial understanding of our intended message without any additional unintended content (see Table 3).

Pilot test discussion

The primary purpose of the pilot test was to assist our research team in selecting an infographic to use as part of a larger randomized trial [15]. As prespecified, no single analysis (change in trust, narrative believability) was to be interpreted as a sole means of determining which infographic to select for the upcoming trial. Further, the quantitative results were to be interpreted in tandem with the qualitative data.

Quantitative

The pilot study was not powered to test for significant differences at pre- and post-test for the different infographics (and even then, within-group changes such as those we presented do not allow for inference of causality). Instead, our goal with those items was to check for any signs of potential iatrogenic changes (e.g., trust scores decreasing from pre- to post-test), which we did not observe, and to examine general trends.

The narrative believability score was not significantly different across arms—on a scale ranging from one to seven, the range of means was narrow, from 5.27 to 5.97, indicating generally good believability. Those scores, along with subscale variability (not shown, but available via the data and syntax), were conceptually consistent with other research on narrative believability [22]. Thus, no infographics were inherently eliminated from consideration due to the quantitative data alone.

Qualitative

For multiple infographics (#1, 2, and 4), when respondents were asked to describe the meaning of the infographics in their own words, between 5 and 10 participants in each of those arms veered away from the messages we were attempting to communicate. As a result, the qualitative evidence was weighted in support of infographics 3 and 5, for which most responses (17 and 18 descriptions, respectively) indicated that we successfully communicated our message (see Additional files 3, 4, 5, and 6).

On closer examination, though, we wondered whether the additional text in infographic 5, relative to the other infographics, may have inflated the prevalence of consistent descriptions for that infographic. Because epistemology (the conceptual target of infographic 5) is a complex concept, we felt that this extra text was necessary. However, some responses describing infographic 5 that were classified as ‘consistent’ contained direct restatements of the provided text. As a result, it was unclear to us whether the high frequency of accurate restatement reflected an understanding of the message in the infographic or rote repetition of written text.

Conclusions

The quantitative data did not make a strong case for any specific infographic, and the infographic with arguably the “best” quantitative case (infographic 4) also appeared to create uncertainty and even oppositional interpretation. Infographics 3 and 5 both performed well qualitatively, but we were somewhat concerned that infographic 5’s qualitative performance may have been artificially inflated. As a result, we made the difficult decision to adopt infographic 3 for our larger study, though we note that a case could be made for infographics 4 and 5 as well, and we encourage research and exploration of those files, which we have released alongside this note.

Limitations

This cross-sectional pilot study was intended to select an infographic to be used as part of a larger randomized trial. As such, it was designed to provide exploratory insight into five infographics, but not to draw widely generalizable conclusions. Further, the breadth of infographics that we designed and tested was limited by our own belief that the messages should be trustworthy—that is, regardless of the goals of the study, our intention was that the messages communicated by the infographics should be things that are true, even if the possibility exists that exaggerated claims could produce larger effects.

Availability of data and materials

Data generated during this pilot study as well as the analytic code are available as supplemental files alongside this manuscript (see Additional file 1 and Additional file 2). All infographics except the one selected for use in the subsequent randomized controlled trial are also available as supplemental files (see Additional files 3, 4, 5, and 6).

Abbreviations

ANOVA: Analysis of variance

CDC: Centers for Disease Control and Prevention

COVID-19: Coronavirus disease 2019

mTurk: Amazon Mechanical Turk

nbs-12: Narrative believability scale-12

References

  1. Mian A, Khan S. Coronavirus: The spread of misinformation. BMC Med. 2020;18:89.


  2. Kouzy R, Jaoude JA, Kraitem A, Alam MBE, Karam B, Adib E, et al. Coronavirus goes viral: quantifying the COVID-19 misinformation epidemic on Twitter. Cureus. 2020;12(3):e7255.


  3. Brennen JS, Simon FM, Howard PN, Nielsen RK. Types, sources, and claims of COVID-19 misinformation. RISJ. 2020;7:3.


  4. Lynas M. COVID: Top 10 current conspiracy theories. 2020. https://allianceforscience.cornell.edu/blog/2020/04/covid-top-10-current-conspiracy-theories/. Accessed 22 May 2021.

  5. Cinelli M, Quattrociocchi W, Galeazzi A, Valensise CM, Brugnoli E, Schmidt AL, et al. The COVID-19 social media infodemic. Sci Rep. 2020;10:16598.


  6. Reichert C. 5G coronavirus conspiracy theory leads to 77 mobile towers burned in UK, report says. CNet Health and Wellness. 2020. https://www.cnet.com/health/5g-coronavirus-conspiracy-theory-sees-77-mobile-towers-burned-report-says/. Accessed 24 May 2020.

  7. Parker B. How a tech NGO got sucked into a COVID-19 conspiracy theory. The New Humanitarian. 2020. https://www.thenewhumanitarian.org/news/2020/04/15/id2020-coronavirus-vaccine-misinformation. Accessed 24 May 2020.

  8. Enders AM, Uscinski JE, Klofstad C, Stoler J. The different forms of COVID-19 misinformation and their consequences. HKS Misinfo Rev. 2020;1(8):1–21.


  9. Sallam M, Dababseh D, Yaseen A, Al-Haidar A, Taim D, Eid H, et al. COVID-19 misinformation: mere harmless delusions or much more? A knowledge and attitude cross-sectional study among the general public residing in Jordan. PLoS One. 2020;15(12):e0243264.


  10. Krause NM, Freiling I, Beets B, Brossard D. Fact-checking as risk communication: the multi-layered risk of misinformation in times of COVID-19. J Risk Res. 2020. https://doi.org/10.1080/13669877.2020.1756385.


  11. Banas JA, Rains SA. A meta-analysis of research on inoculation theory. Commun Monogr. 2010;77(3):281–311.


  12. Jolley D, Douglas KM. Prevention is better than cure: addressing anti-vaccine conspiracy theories. J Appl Soc Psychol. 2017;47(8):459–69.


  13. Agley J. Assessing changes in US public trust in science amid the Covid-19 pandemic. Public Health. 2020;183:122–5.


  14. Agley J, Xiao Y. Misinformation about COVID-19: evidence for differential latent profiles and a strong association with trust in science. BMC Public Health. 2021;21:89.


  15. Agley J, Xiao Y, Thompson EE, Golzarri-Arroyo L. COVID-19 misinformation prophylaxis: protocol for a randomized trial of a brief informational intervention. JMIR Res Protoc. 2020;9(12):e24383.


  16. Nadelson L, Jorcyk C, Yang D, Smith MJ, Matson S, Cornell K, et al. I just don’t trust them: the development and validation of an assessment instrument to measure trust in science and scientists. Sch Sci Math. 2014;114(2):76–86.


  17. Keith MG, Tay L, Harms PD. Systems perspective of Amazon Mechanical Turk for organizational research: review and recommendations. Front Psychol. 2017;8:1359.


  18. Kim HS, Hodgins DC. Are you for real? Maximizing participant eligibility on Amazon’s Mechanical Turk. Addiction. 2020. https://doi.org/10.1111/add.15065.


  19. Dobos AR, Orthia LA, Lamberts R. Does a picture tell a thousand words? The uses of digitally produced, multimodal pictures for communicating information about Alzheimer’s disease. Public Underst Sci. 2015;24(6):712–30.


  20. Yale RN. Measuring narrative believability: development and validation of the narrative believability scale (nbs-12). J Commun. 2013;63:578–99.


  21. Thomas DR. A general inductive approach for analyzing qualitative evaluation data. Am J Eval. 2006;27(2):237–46.


  22. Jensen JD, Yale RN, Krakow M, John KK, King AJ. Theorizing foreshadowed death narratives: examining the impact of character death on narrative processing and skin self-exam intentions. J Health Commun. 2017;22(1):84–93.



Acknowledgements

The authors would like to thank Ms. Amanda Goehlert for her work as a digital artist and design expert.

Funding

This study was made possible with support from the Indiana Clinical and Translational Sciences Institute, funded in part by Award Number UL1TR002529 from the National Institutes of Health, National Center for Advancing Translational Sciences, Clinical and Translational Sciences Award. The contents of this manuscript are solely the responsibility of the authors and do not necessarily represent the official views of the National Institutes of Health.

Author information


Contributions

All authors assisted with development of the pilot study. JA conducted the study, wrote the manuscript, and performed initial analyses. YX, ET, and LG reviewed the analyses. All authors revised the final draft of the manuscript. All authors have read and approved the final manuscript.

Corresponding author

Correspondence to Jon Agley.

Ethics declarations

Ethics approval and consent to participate

All participants digitally provided informed consent prior to participating per the protocol approved by the Indiana University IRB (#2008571490).

Consent for publication

Not applicable.

Competing interests

No competing interests pertinent to the content of this manuscript exist.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional files 3, 4, 5, and 6 are infographics used in this study. The concepts for each of the infographics were developed by the research team. The infographics were produced by Ms. Amanda Goehlert, a Designer on the Creative Team at Indiana University Studios. These images are considered part of this article for the purposes of licensing and use.

Additional file 1:

COVID19 Pilot Test Data Raw.sav. Raw study dataset in SPSS (v. 27) format.

Additional file 2:

Covid19 Pilot Test Syntax.sps. Analysis syntax used for the study (SPSS v. 27).

Additional file 3:

Research illustrations_concept 1.jpg. Infographic 1 from Arm 1 of the study.

Additional file 4:

Research illustrations_concept 2.jpg. Infographic 2 from Arm 2 of the study.

Additional file 5:

Research illustrations_concept 4.jpg. Infographic 4 from Arm 4 of the study.

Additional file 6:

Research illustrations_concept 5.jpg. Infographic 5 from Arm 5 of the study.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


Cite this article

Agley, J., Xiao, Y., Thompson, E.E. et al. Using infographics to improve trust in science: a randomized pilot test. BMC Res Notes 14, 210 (2021). https://doi.org/10.1186/s13104-021-05626-4
