Published in Vol 3 (2023)

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/50138.
Addressing Antivaccine Sentiment on Public Social Media Forums Through Web-Based Conversations Based on Motivational Interviewing Techniques: Observational Study


Original Paper

1Weill Cornell Medicine, New York City, NY, United States

2Critica, Bronx, NY, United States

3Tuck School of Business, Dartmouth College, Hanover, NH, United States

Corresponding Author:

David Scales, MPhil, MD, PhD

Weill Cornell Medicine

525 East 68th Street

Box 331

New York City, NY, 10065

United States

Phone: 1 2127464071

Email: das9289@med.cornell.edu


Background: Health misinformation shared on social media can have negative health consequences; yet, there is a dearth of field research testing interventions to address health misinformation in real time, digitally, and in situ on social media.

Objective: We describe a field study of a pilot program of “infodemiologists” trained with evidence-informed intervention techniques heavily influenced by principles of motivational interviewing. Here we provide a detailed description of the nature of infodemiologists’ interventions on posts sharing misinformation about COVID-19 vaccines, present an initial evaluation framework for such field research, and use available engagement metrics to quantify the impact of these in-group messengers on the web-based threads on which they are intervening.

Methods: We monitored Facebook (Meta Platforms, Inc) profiles of news organizations marketing to 3 geographic regions (Newark, New Jersey; Chicago, Illinois; and central Texas). Between December 2020 and April 2021, infodemiologists intervened in 145 Facebook news posts that generated comments containing either false or misleading information about vaccines or overt antivaccine sentiment. Engagement (emojis plus replies) data were collected on Facebook news posts, the initial comment containing misinformation (level 1 comment), and the infodemiologist’s reply (level 2 reply comment). A comparison-group evaluation design was used, with numbers of replies, emoji reactions, and engagements for level 1 comments compared with the median metrics of matched comments using the Wilcoxon signed rank test. Level 2 reply comments (intervention) were also benchmarked against the corresponding metric of matched reply comments (control) using the Wilcoxon signed rank test (paired at the level 1 comment level). Infodemiologists’ level 2 reply comments (intervention) and matched reply comments (control) were further compared using 3 Poisson regression models.

Results: In total, 145 interventions were conducted on 132 Facebook news posts. The level 1 comments received a median of 3 replies, 3 reactions, and 7 engagements. The matched comments received a median of 1.5 (median of IQRs 3.75) engagements. Infodemiologists made 322 level 2 reply comments, precipitating 189 emoji reactions and a median of 0.5 (median of IQRs 0) engagements. The matched reply comments received a median of 1 (median of IQRs 2.5) engagement. Compared to matched comments, level 1 comments received more replies, emoji reactions, and engagements. Compared to matched reply comments, level 2 reply comments received fewer replies, reactions, and engagements, and narrower ranges of each, except for the median comparison for replies.

Conclusions: Overall, empathy-first communication strategies based on motivational interviewing garnered less engagement relative to matched controls. One possible explanation is that our interventions quieted contentious, misinformation-laden threads about vaccines on social media. This work reinforces research on accuracy nudges and cyberbullying interventions that also reduce engagement. More research leveraging field studies of real-time interventions is needed, yet data transparency by technology platforms will be essential to facilitate such experiments.

JMIR Infodemiology 2023;3:e50138

doi:10.2196/50138

Introduction

Extensive research has shown that health misinformation has real, negative consequences. It can influence people to hold misperceptions and adopt unhealthy behaviors [1-8]. This led the US Surgeon General to issue a special advisory on the topic [9], in which health misinformation was defined as “information that is false, inaccurate, or misleading according to the best available evidence at the time” [10,11]. We also know that although supplying facts is often necessary to counteract misinformation, it is usually not sufficient to change opinions or behavior [12,13]. Belief in misinformation can be deeply ingrained, reinforced by psychological and social pressures, and difficult to dislodge [14]. This is especially the case with information on the internet and social media presented in misleading contexts and subjected to repeated sharing, reposting, and commenting. Some such information, whether true or false, can be spread with the intent to deliberately create misperceptions or sway public opinion [15].

From the onset of the COVID-19 pandemic, the World Health Organization (WHO) described the situation of a parallel “infodemic” [16], defined as “excess information, including false or misleading information, in digital and physical environments during an acute public health event” [17]. This infodemic focus has renewed interest in “infodemiology,” the epidemiological study of these digitally enabled flows of information [18], and the need for professionals equipped to assess and respond to misinformation of public health importance as a core function of public health [19,20]. While an epidemic metaphor has its limits and externalities [21], it offers a framework with which to marshal resources toward understanding and mitigating the problem. Put another way, the lens of an infodemic suggests the need to develop field epidemiologists to deploy in public health and infodemic emergencies for rapid support of public health communications and interventions [22].

While public health institutions such as the Centers for Disease Control and Prevention (CDC) or the WHO issue messages on social media, these public health broadcasts are often at the periphery of web-based discussions about vaccines, Ebola, and the Zika virus [23-25]. Similarly, during the COVID-19 pandemic, information with public health relevance was decreasingly reliant on top-down recommendations from doctors and public health institutions (eg, the CDC) and more reliant on socially contextualized, decentralized, interpersonal, horizontal, and networked communication like that found on social media [26]. In contrast, antivaccine advocates are often leveraging the affordances of digital platforms to communicate in a coordinated, networked fashion [25,27]. We therefore hypothesized that best practice public health recommendations would not speak for themselves but would require trusted, community-linked advocates to communicate and interpret them within the value frames of those networks. We therefore developed a protocol based on principles of motivational interviewing and other evidence-based approaches, including inoculation, use of narratives, and promoting critical thinking, to address misinformation in web-based contexts and used it to intervene on Facebook (Meta Platforms, Inc) when misinformation about COVID-19 vaccines appeared [22].

Our approach is based on a menu of tactics derived from 3 main strategies: assessing how receptive the person posting health misinformation may be to an intervention; increasing high-quality, science-based messages across the web-based communication network; and reducing misinformation across that network. A critical principle underlying the protocol is derived from motivational interviewing (MI) techniques [28], which have shown efficacy in addressing vaccine hesitancy [29]. Using MI principles in this setting meant that the interventionist attempted to establish common ground with the person who posted misinformation and expressed empathy and an interest in understanding their point of view before responding directly to misinformed comments. Open-ended questioning and reflective listening in the spirit of MI are used throughout. Work to fully adapt MI to this setting, which we term community-oriented motivational interviewing, is ongoing [30].

As the person posting misinformation on social media is often committed to the misinformed point of view and unlikely to be immediately persuaded to consider an alternative perspective, the infodemiologists also consider the perspective of “bystanders” to the conversation, those observing but not necessarily engaging or commenting [31]. Such “bystanders” are hypothesized to be part of the “moveable middle” [32] and more persuadable about issues such as COVID-19 vaccines than the initial commentator. As misinformation can be perceived as true through repetition, infodemiologists seek to disrupt that “illusory truth effect” [33] while also role modeling how community members can make decisions commensurate with their values despite scientific uncertainty.

The purpose of this study is to provide a detailed description of infodemiologists’ interventions. We also present an outline of an initial evaluation framework for such work, highlighting major gaps in the lack of accessibility of social media data that hinder researchers’ ability to tie their work to more concrete outcomes, like behaviors. Finally, we quantify the impact these in-group messengers have on the web-based threads on which they are intervening.


Methods

Overview

Infodemiologists were drawn from the communities in which they intervene to help ensure trust through shared identity and values [34-37]. Details on the recruitment, credentials, training, and supervision of the 4 infodemiologists involved in this report have been published previously, along with descriptions of the intervention process [22,30]. Briefly, infodemiologists underwent a skills-based training consisting of practice interventions and weekly supervision sessions with one of the authors (DS or JMG) for feedback, totaling approximately 20-30 hours of training, practice, and supervision. They were first assigned independent reading to provide guidance on the evidence behind different communication techniques and then conducted a series of web-based training interventions with supervision, reflection, and feedback with DS and JMG [13,38,39]. The instructions on how to conduct interventions were broad, emphasizing that they needed to be tailored to the context. After the initial training, supervision continued at weekly group reflection sessions with all infodemiologists throughout the course of the study. All infodemiologist interventions were included in data collection and analysis, and none were excluded. A total of 145 pilot interventions were conducted between December 2020 and April 2021.

Full details on our misinformation monitoring and identification process are available from Gorman and Scales [22] and Scales et al [30]. In short, we monitored web-based Facebook profiles of news organizations marketing to 3 geographic regions: Newark, New Jersey; Chicago, Illinois; and central Texas. Regions were chosen for demographic, geographic, and urban or rural diversity. Infodemiologists were trained to select local media postings on Facebook that had generated comments containing either misinformation about vaccines or antivaccine sentiment within several hours of their posting. We defined misinformation about COVID-19 vaccines practically as any post that contained factually incorrect material or overtly negative sentiment about the vaccines, regardless of the motive of the person posting. This was a subjective assessment based on the infodemiologist's perception of what could be considered negative from the perspective of their community. For more details on how threads were chosen for interventions, see Scales et al [30]. Infodemiologists recorded deidentified transcripts of the conversations (including ancillary comments from bystanders) as well as native engagement metrics (likes, shares, etc). Information on matched comments and replies for benchmarking was collected later, but a sensitivity analysis did not find significant changes in conversation metrics over time. Infodemiologists were supported in their work through a process of written reflection after each intervention, direct written feedback on their interventions, and weekly group supervision sessions. Moreover, to protect them from harassment, infodemiologists were instructed to exit conversations that became emotionally heated or where they felt unsafe. To minimize web-based harassment, infodemiologists used Critica's Facebook account and identified themselves only by their first names.
The structure of the comments can be found in Figure 1, and an example of an intervention can be seen in Figure 2, paraphrased to protect the privacy of participants [40].

Figure 1. Visual description of comment and reply levels.


In any given infodemiology session, infodemiologists deployed various evidence-informed communication techniques depending on the context (Table 1). Because there is little evidence to guide which communication technique should be used at any given time, we developed a 2-pronged approach to guide how and when to apply different intervention techniques. We engaged in discursive reflection among our team members, sometimes in real time through email or Slack (Slack Technologies) channels and at our weekly reflection meetings, to assess which techniques appeared to curtail conversations and engagement or to promote reflection or resistance (ie, "change talk" or "sustain talk" in motivational interviewing language). Additionally, we paid particular attention to whether interventions elicited backfire effects, or psychological reactance, defined as escalating negative emotions through the course of an interaction with the infodemiologist [41]. Immediately after every initial intervention, infodemiologists posted a disclaimer identifying themselves as researchers along with a web link to further information about the research study, including options to request that data be removed from our database; no such requests were received. Table 1 provides a glimpse into the range of communication techniques that infodemiologists may use and the evidence behind them; it is not a comprehensive compilation of such techniques or the supporting evidence, and a full review of this literature is beyond the scope of this study.

Table 1. Infodemiologist menu of techniques, protocols, and corresponding evidence.

Principle or approach and goal | Explanation or example | Reference

Receptivity to finding misinformation credible

Infodemiologists should be from and within the communities and networks in which they will be intervening | Example cues (language, register, and slang) as markers of in-group identity | [35,42,43]

Assess readiness for change | Precontemplation, contemplation, preparation, action, and maintenance | [44,45]

Apply relevant principles of motivational interviewing | Open-ended questions, affirmations, reflective listening, summarizing, and promoting self-efficacy | [38,46,47]

Focus on the "fencers" or the "moveable middle" | People with heavily committed beliefs are unlikely to change their views quickly. | [31,32,48]

Promote critical thinking | "Please tell me more about that? Maybe give an example?" | [49]

Inoculation | "Misinformation will use various methods to make you doubt vaccines, like saying, 'vaccines will make you infertile forever!'" | [35,50-53]

Increase high-quality, science-based messages across the network

Targeting highly visible Facebook news sources with little comment moderation | Facebook is often slow to implement its own misinformation takedown policies. | [54-56]

Reframing negative comments according to in-group cultural values regarding uncertainty, freedom to choose, etc | "I understand masks feel to you like they restrict freedom, but I'm proud to wear a mask with traditional designs that also helps protect our elders." | [57-59]

Use personal narratives or anecdotes | "I was so relieved when I got vaccinated. I stopped worrying that I'd die if I got COVID." | [60]

Respond as quickly as possible after comments are posted | Ensures misinformation does not become entrenched or misinterpreted. | [61,62]

Detailed rebuttals, if needed | Rebuttals without explanations are less effective. | [49,63,64]

Link to sources likely to be trusted by commenters | Example: not citing the CDC^a or the FDA^b as sources if stakeholders are antigovernment | [34,61,65]

Repetition | People attribute more accuracy to repeated information. | [66]

Reduce misinformation across the network

Reframing uncertainty as congruent with values | Unknown risks psychologically loom large; reframing them around known benefits provides balance. | [57,58]

Offering alternate, more plausible explanations | Plausible explanations accompanying warnings or rebuttals increase effectiveness. | [67]

Appeal to expert consensus | "97% of climate scientists agree that human-caused climate change is happening." | [35,68-70]

Appeal to accuracy | Asking, "how accurate is that headline?" | [71,72]

Recontextualizing information taken out of context | "While true, these data are unverified user reports, not official statistics." | [73]

^aCDC: Centers for Disease Control and Prevention.

^bFDA: Food and Drug Administration.

To build an evaluation strategy, we used the reach, effectiveness, adoption, implementation, and maintenance (RE-AIM) framework from implementation science [74], which provided a strategic guide for ideal evaluation of infodemiology interventions. Originally designed to incentivize scientists to be transparent and reflect on internal and external validity across the continuum of translational research from pilot to effectiveness studies, the RE-AIM framework was chosen here for 2 main reasons. First, it is familiar, being one of the most widely used and cited implementation science frameworks. Second, it has been successfully adapted to multiple and diverse contexts, suggesting it could also be applied to the web-based setting in which this research was done [75].

While implementation science is often used for interventions whose efficacy and effectiveness have already been established and which require further intervention to ensure their uptake into a specific context of interest, we believe it offers useful frameworks for work in digital spaces to counter misinformation, even though the evidence base is still emerging. In that context, we note that the communication techniques listed in Table 1 have demonstrated varying degrees of efficacy, and not all have demonstrated effectiveness in real-world, web-based settings. However, due to the constantly changing nature of web-based platforms, we recognized that effectiveness data either would not be forthcoming quickly or would essentially be outdated by publication due to platform changes (algorithmic, graphical user interface, or other). We therefore took the approach outlined by Lane-Fall et al [76] that hybrid implementation-effectiveness studies may be most appropriate when the urgency of the situation requires it, coupled with strong indirect evidence of potential effectiveness in the context of interest. In this study, we focus on the effectiveness results. Refer to Gorman and Scales [22] for an implementation discussion.

We collected data on the Facebook news post, the initial comment containing misinformation, which we will refer to as a “level 1 comment,” and the infodemiologist reply comment, which is the start of the intervention, referred to as a “level 2 reply comment” (Figures 1 and 2 contain visual descriptions and an example of different comments and replies described here). Facebook organizes comments into threads, with level 2 comments branching off level 1 comments. Engagement was defined as the total number of comments and emoji reactions (like, love, hug, mad, haha, wow, and sad). In the context of this topic, emoji reactions “like,” “love,” and “hug” are interpreted as positive reactions, and “mad” and “haha” are negative reactions, with the latter interpreted as sarcasm. “Wow” and “sad” are considered neutral reactions.
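As a concrete illustration of the engagement metric and emoji-valence scheme just defined, a minimal Python sketch follows. The dictionary fields and function names are illustrative only and do not correspond to any Facebook API.

```python
# Valence mapping as defined above: "like", "love", and "hug" are read as
# positive; "mad" and "haha" as negative ("haha" interpreted as sarcasm in
# this context); "wow" and "sad" as neutral.
VALENCE = {
    "like": "positive", "love": "positive", "hug": "positive",
    "mad": "negative", "haha": "negative",
    "wow": "neutral", "sad": "neutral",
}

def engagement(n_replies, emoji_counts):
    """Engagement = number of comments (replies) + total emoji reactions."""
    return n_replies + sum(emoji_counts.values())

def valence_totals(emoji_counts):
    """Aggregate raw emoji counts into positive/negative/neutral totals."""
    totals = {"positive": 0, "negative": 0, "neutral": 0}
    for emoji, count in emoji_counts.items():
        totals[VALENCE[emoji]] += count
    return totals
```

Applied to the level 2 reaction counts reported in the Results (141 like, 10 love, 37 haha, and 1 wow), this yields 151 positive, 37 negative, and 1 neutral reaction.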

Figure 2. Example transcript of an infodemiologist intervention on a local news article from Texas describing how employers could require COVID-19 vaccines. To prevent retroactive identification, engagement and response numbers are rounded and the transcript paraphrased.

We implemented an innovative comparison-group evaluation design, building upon existing designs that measure engagement without a comparative benchmark [77-79]. Specifically, we collected data on comparison comments and replies adjacent to the intervention threads. Comments gathered for benchmarking were the 5 level 1 comments immediately above and the 5 immediately below the level 1 comment to which the infodemiologist replied, as ordered at the time of subsequent data collection; because Facebook's ranking algorithms are not transparent, this ordering may have changed since the time of the intervention. In rare circumstances where the level 1 comment subject to an infodemiologist's intervention could not be found (eg, absorbed into "Relevant" by Facebook), data were collected on 10 comments from the middle of the comment thread. Reply comments gathered for benchmarking were collected in the same way: the 5 level 2 comments above and the 5 below the infodemiologist's level 2 reply comment.
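The matching procedure described above can be sketched as follows. This is a simplified illustration (the comment objects and "id" field are hypothetical); in the study, selection was performed manually on rendered Facebook threads.

```python
def select_matched(comments, target_id, k=5):
    """Select benchmark comments at the same thread level: the k comments
    immediately above and the k immediately below the target comment.
    If the target cannot be found (eg, hidden or reordered by the
    platform), fall back to 2*k comments from the middle of the thread.
    `comments` is the thread in display order; each item is a dict with
    an (illustrative) "id" key.
    """
    ids = [c["id"] for c in comments]
    if target_id in ids:
        i = ids.index(target_id)
        return comments[max(0, i - k):i] + comments[i + 1:i + 1 + k]
    # Fallback: 2*k comments from the middle of the thread
    mid = len(comments) // 2
    start = max(0, mid - k)
    return comments[start:start + 2 * k]
```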

Collecting the comments data for benchmarking allowed us to consider the question, “How does engagement on the level 1 comment the infodemiologist chose to respond to compare to the engagement on the typical level 1 comment on the same Facebook news post?” Benchmarking with reply comments data allowed us to compare the engagement the infodemiologist interventions (ie, level 2 reply comments) received to the engagement on the typical level 2 reply comment on the same Facebook news post. For each level 1 comment, we summarized the infodemiologists’ level 2 reply comments into 6 metrics: the median and IQR of the respective numbers of replies, emoji reactions, and engagements infodemiologists’ reply comments received. For comparative purposes, we also summarized the numbers of comments and reply comments and the corresponding 6 metrics for the comments used for benchmarking. The numbers of replies, emoji reactions, and engagements level 1 comments received were compared with the median metrics of matched comments using the Wilcoxon signed rank test. Each metric of infodemiologists’ level 2 reply comments (intervention) was benchmarked against the corresponding metric of matched reply comments (control) using the Wilcoxon signed rank test (paired at the level 1 comment level). The number of replies, emoji reactions, and engagements between infodemiologists’ level 2 reply comments (intervention) and matched reply comments (control) were further compared using 3 Poisson regression models: treating intervention as a fixed effect with the Huber-White robust SE estimates, as a random effect (nested within location), and as a random effect (nested within location) with the number of page followers/1,000,000 as an offset. The fixed effect design is particularly strong as an internal validity test, as it controls for any confounding, both observed and unobserved, across level 1 posts. For example, level 1 posts differed by geography, timing, and news organization. 
Limiting comparisons to level 2 intervention and matched control reply comments nested within the same level 1 post sweeps away these concerns. The tradeoff of fixed effects estimation is a relative loss of statistical precision and an inability to characterize level 1 influences; as such, we estimated random effects models as a robustness exercise. The significance level was set at α=.05, and no correction was made for multiple testing, as this was an exploratory, hypothesis-generating analysis.
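For illustration, the test statistic underlying these paired comparisons can be computed as below: a minimal pure-Python sketch of the Wilcoxon signed rank statistic (W+, the sum of ranks of positive paired differences, dropping zero differences and assigning average ranks to tied absolute differences). In practice a statistics library would be used (eg, scipy.stats.wilcoxon for the signed rank test, and a GLM routine for the Poisson models), as it also supplies P values.

```python
def wilcoxon_signed_rank(x, y):
    """Wilcoxon signed rank statistic W+ for paired samples x, y:
    drop zero differences, rank absolute differences (average ranks
    for ties), and sum the ranks of the positive differences."""
    diffs = [a - b for a, b in zip(x, y) if a != b]
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):
        # find the run of tied absolute differences starting at i
        j = i
        while j + 1 < len(order) and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg = (i + j) / 2 + 1  # average 1-based rank of the tied run
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return sum(r for d, r in zip(diffs, ranks) if d > 0)
```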

Ethical Considerations

This study was deemed exempt from institutional review board (IRB) review by the Salus IRB (#2014) and approved by the Weill Cornell Medicine IRB (20-10022858). To comply with Facebook Terms of Service, all data were manually collected by infodemiologists and manually reviewed for accuracy during a follow-up assessment approximately 10 months later.


Results

A total of 145 interventions were conducted on 132 Facebook news posts; 55 interventions (38%) involved Illinois news sources, 8 (6%) New Jersey, 45 (31%) Texas, and 37 (26%) news sources in other states or with a national orientation. Nearly two-thirds (93/145, 64.1%) of infodemiologist interventions precipitated some form of engagement (either an emoji reaction or a reply comment) from commenters or activated bystanders. In keeping with related literature, comment engagements were right-skewed; accordingly, we report medians and IQRs. The Facebook pages for the news organizations on which infodemiologist interventions (level 2 reply comments) were posted had a median of 915,860 (Q1-Q3 range 634,473-2,689,864) page followers. The Facebook news posts received a median of 19 (Q1-Q3 range 7-86) shares, 119 (Q1-Q3 range 37-352) comments, 190 (Q1-Q3 range 73-510) emoji reactions, and 354 (Q1-Q3 range 119-918) engagements, where engagements are the sum of comments and emoji reactions received (Table 2).
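Because the engagement counts are right-skewed, medians and quartiles (rather than means) are used throughout. The following standard-library Python sketch shows the summary convention; note that quartile conventions differ slightly across software ("inclusive" linear interpolation is used here).

```python
from statistics import median, quantiles

def summarize(counts):
    """Median and Q1-Q3 summary for a right-skewed count distribution.
    Uses the "inclusive" (linear interpolation) quartile convention;
    other conventions can give slightly different Q1/Q3 values."""
    q1, _, q3 = quantiles(counts, n=4, method="inclusive")
    return {"median": median(counts), "q1": q1, "q3": q3}
```

For example, summarize([0, 1, 2, 3, 100]) gives a median of 2 with a Q1-Q3 range of 1-3, whereas the mean (21.2) would be dominated by the single outlier.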

Table 2. Metrics for Facebook news page and Facebook news post.

 | Page followers, median (Q1-Q3) | Shares, median (Q1-Q3) | Comments, median (Q1-Q3) | Reactions, median (Q1-Q3) | Engagements^a, median (Q1-Q3)

Facebook page | 915,860 (634,473-2,689,864) | ^b | ^b | ^b | ^b

Facebook news post | ^b | 19 (7-86) | 119 (37-352) | 190 (73-510) | 354 (119-918)

^aEngagements = comments + emoji reactions for Facebook news post.

^bNot available.

The level 1 comments received a median of 3 replies, 3 reactions, and 7 engagements. The matched comments received a median of 0 replies with a median IQR of 0.75, 1 emoji reaction with a median IQR of 2, and therefore 1.5 engagements with a median IQR of 3.75. Compared to the matched comments, the level 1 comments received more replies, emoji reactions, and engagements (Table 3).

Table 3. Metrics for level 1 comment and matched comment.

 | Level 1 comment, median (Q1-Q3) | Matched comment, median (Q1-Q3) | P value^a

Replies

Median | 3 (2-7) | 0 (0-0.5) | <.001

IQR | ^b | 0.75 (0-2) |

Emoji reactions

Median | 3 (1-6) | 1 (0-2) | <.001

IQR | ^b | 2 (1-4) |

Engagements^c

Median | 7 (3-12) | 1.5 (0.5-2.5) | <.001

IQR | ^b | 3.75 (1.75-6) |

^aThe Wilcoxon signed rank test was used to examine the difference between level 1 comments and matched comments and revealed that level 1 comments received more replies, emoji reactions, and engagements.

^bNot available.

^cEngagements = replies + emoji reactions.

In total, infodemiologists made 322 level 2 reply comments, precipitating 189 emoji reactions, of which 151 (79.9%) were positive (141 like, 10 love, and 0 hug emojis), 37 (19.6%) were negative (37 haha and 0 mad emojis), and 1 (0.5%) was neutral (1 wow and 0 sad emojis). The level 2 reply comments received a median of 0 replies with a median IQR of 0, 0 emoji reactions with a median IQR of 0, and 0.5 engagements with a median IQR of 0. The matched reply comments received a median of 0 replies with a median IQR of 0.75, 0.5 emoji reactions with a median IQR of 1, and 1 engagement with a median IQR of 2.5. Compared to the matched reply comments, the level 2 reply comments received fewer and narrower ranges of replies, reactions, and engagements, except for the median comparison for replies (Table 4).

Table 4. Metrics for infodemiologist intervention (level 2 reply comment) and matched reply comment.

 | Level 2 reply comment, median (Q1-Q3) | Matched reply comment, median (Q1-Q3) | P value^a | Poisson fixed effects with robust SE estimates | Poisson random effects | Poisson random effects with page followers/1,000,000 as an offset

Replies | | | | –0.98^b | –0.98^b | –0.99^b

Median | 0 (0-0.5) | 0 (0-0.25) | .94 | | |

IQR | 0 (0-0.5) | 0.75 (0-1.75) | <.001 | | |

Emoji reactions | | | | –1.19^b | –1.21^b | –1.20^b

Median | 0 (0-0.5) | 0.5 (0-1) | <.001 | | |

IQR | 0 (0-0.3125) | 1 (1-2.5) | <.001 | | |

Engagements^c | | | | –1.10^b | –1.11^b | –1.10^b

Median | 0.5 (0-1) | 1 (1-2) | <.001 | | |

IQR | 0 (0-0.75) | 2.5 (1-4.25) | <.001 | | |

^aWilcoxon signed rank test.

^bP<.001.

^cEngagements = replies + emoji reactions.

The median number of individuals involved in conversation threads with the infodemiologist was 2 with a median IQR of 3. Qualitative evidence of psychological reactance or a backfire effect (assessed by observing if discussion with an infodemiologist appeared to immediately lead to a commenter leaving more extreme comments) was rare, appearing in 1% (2/145) of interventions.


Discussion

The primary purpose of this research was to determine whether infodemiologist interventions would receive attention in web-based settings. We found that, by and large, they do; however, the evidence for such attention is limited because Facebook's engagement data capture only active engagement and provide no information on viewers who neither comment nor react with emojis. We also sought to develop a basis for assessing the extent of that attention by comparing engagement with our comments against engagement with comments made by others on the same post (matched comments). More specifically, infodemiologists' interventions (level 2 reply comments) on average received fewer replies and less overall expressed sentiment (positive, neutral, or negative), as evidenced through native metrics such as "likes," than matched reply comments; this difference in engagement was statistically significant. Moreover, according to the IQR comparisons, the infodemiologists' comments also received a narrower range of replies, reactions, and engagements than matched benchmark comments.

This could suggest that our impact was less than that of antivaccination comments, but another, more nuanced interpretation of our results, based on the context in which the interventions were made, is that our reply comments led to a quieting of the conversation rather than stimulating more antivaccination comments. On a highly charged political topic, such as the discourse surrounding COVID-19 vaccines, reducing engagement may be one effective way to reduce the spread of misinformation, even if the impact on participants’ and bystanders’ beliefs and behavior remains unclear. For example, numerous studies of accuracy nudges demonstrate such interventions reduce intentions to share misleading or false content [80-82].

We recognize that reduced engagement does not necessarily imply agreement. Moreover, there are several potential explanations for why infodemiologists' comments received less engagement, including participant boredom, inattention, apathy, undetected or silent backfire effects, or algorithmic downregulation. However, Facebook data limitations preclude us from tracking the comments of either participants or bystanders after infodemiologist interventions to assess whether interventions changed their attitudes about vaccines or led to anyone's decision to subsequently receive a vaccine.

While some engagement is useful or even necessary to algorithmically drive attention to infodemiologist interventions and therefore increase overall views, attempts to drive too much engagement—that is, through the strong emotional reactions of outrage or fear that may be required to drive such metrics—could be detrimental to the tenor of conversations infodemiologists are seeking to have. In addition, high degrees of engagement and the emotional valence such conversations are likely to bring may expose infodemiologists to other risks, such as harassment or doxing. The optimal amount of engagement that balances these 2 competing priorities is not clear.

Considering the optimal amount of engagement alludes to a larger issue about the metrics being used to assess such digital interventions overall. On social media, the metrics most convenient to use are designed for monitoring the impact of brand marketing [83,84]. Such metrics are not conducive to public health evaluations of the dynamics of misinformation [20]. Moreover, as has been previously noted, engagement itself does not necessarily align with efforts to prevent or mitigate the spread of misinformation about science and health. Efforts to stymie the production or spread of misinformation then face a strategic dilemma: maximize engagement through the native metrics made available by social media platforms (and incur the subsequent externalities) or engage in time- and labor-intensive practices of data collection to generate alternate metrics. The former approach implies that the solution to misinformation about science or medicine is simply to make science more engaging; however, as Brandolini’s law implies, the time, effort, and cost of addressing falsehoods are orders of magnitude larger than the resources required to produce them [85], and that approach ignores the diversity and ease with which misinformation about science spreads [86].

This study has several limitations. First and foremost are the limits on data accessibility that curtail efforts to fully understand the impact of infodemiology interventions on the digital information environment and other actors in this space. Indeed, various features of technology platforms’ algorithms and user interfaces made it challenging to maintain the same intervention strategy over time, follow ongoing interventions, and collect sufficient data at scale. One manifestation is that Facebook approximates counts of comments and emojis once numbers become large, leading to similar approximations in our reported data, especially for the number of page followers. Similarly, Facebook algorithms are constantly reordering comments in active threads. Therefore, the benchmarking data, which were collected after the original posting, may represent slightly different results than if the benchmarking had been collected at the same time as the intervention itself. However, as the benchmarking is sufficiently broad, covering the comments both above and below the comment to which the infodemiologist replied, we believe it provides a representative “control” against which we can weigh the infodemiologists’ interventions.

Moreover, while third-party applications offer insight into “social media marketing metrics,” such services are structured to provide data to the owners of Facebook landing pages, not to assess engagement metrics for individuals making multiple interventions across pages hosted by others. Specifically, because we did not own the pages on which we intervened, we could not use commercial monitoring software to access metrics on the views of bystanders who neither commented nor provided emoji reactions. Additionally, because engagement is defined as the combination of 2 non–mutually exclusive events (ie, emoji reactions and comments), it may overstate the number of people engaging in the discussion when someone both commented and expressed an emoji reaction.
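The overstatement in the last point is simple double counting. A minimal sketch (with hypothetical user names, not study data) of how summing reactions and comments overstates the number of unique engagers:

```python
# Hypothetical users who reacted with an emoji or replied to a comment.
reactors = {"ana", "ben", "cam", "dee"}
repliers = {"cam", "dee", "eli"}

# The platform-style "engagement" metric sums the two events,
# so anyone who both reacted and replied is counted twice.
engagement = len(reactors) + len(repliers)

# The number of distinct people is the size of the union.
unique_engagers = len(reactors | repliers)

overcount = engagement - unique_engagers  # people counted twice
print(engagement, unique_engagers, overcount)
```

Here 7 engagement events come from only 5 people, because 2 users both reacted and replied; without user-level data, the size of that overlap cannot be recovered from the aggregate metric.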

Furthermore, it is impossible to separate the effect of the interventions completed by the infodemiologists from the potential impact of how the infodemiologists themselves were perceived by other stakeholders in the discussion. For example, the perception that they may be trusted sources, researchers, or central nodes in a social network could influence how the interventions are received by participants. It is also possible that infodemiologists were responding to fake Facebook accounts, or that engagement numbers were inflated by them. Infodemiologists sought to minimize this risk through careful selection of the threads in which to intervene and by examining a poster’s public profile. Additionally, this work does not address the myriad individual, community, and structural factors that lead to disparities in information sharing on social networks [87]. It also does not effectively address how the structure of the social network affects how health information diffuses [88]. Future work will seek to collect sufficient data to assess these potential confounders.

An additional challenge lies in how misinformation is defined. While our infodemiologists sought to respond to posts with factually incorrect material or overtly negative sentiment about the vaccines, those determinations were made in context and were therefore subjective. Moreover, the sheer volume of such material on social media meant that infodemiologists could respond to only a subset of such instances in their communities.

Our sample was geographically skewed, with relatively few New Jersey-focused interventions, which we believe reflects the overlap of the New York and New Jersey media markets. Finally, our pilot prioritized infodemiologist safety and well-being, both in the choice of intervention site (Facebook posts of news stories) and in the interventions themselves, instructing infodemiologists to avoid highly contentious forums and to exit conversations if they felt threatened. An explicit focus on engagement would likely yield higher engagement statistics, though such an approach would need to be weighed against its externalities (eg, more contentious, emotionally laden content generating fear or outrage, and risks to infodemiologists’ emotional safety, including doxing).

Here we described an approach to addressing health-related misinformation, derived from evidence on methods for intervening against misinformation about various topics in web-based and offline spaces; the resulting interventions were associated with less engagement relative to comparable comments and replies in the same comment threads. More research building on the toolkits and frameworks presented here will be needed to further guide research on addressing misinformation in digital communities. We attempted to show that the RE-AIM framework is an effective schema to guide evaluations in this space, even if direct evidence of real-world efficacy is lacking. However, the gap between ideal evaluation metrics and the data available through social media platforms remains wide.

This raises the question of whether the inexorable drive for engagement—a metric prioritized by social media companies, not public health—solves the problem of misinformation or further exacerbates it by leaving unaddressed the underlying mechanisms, incentives, and logic by which misinformation spreads. The infodemiology work described here, with its impact on reducing the temperature of web-based conversations and avoiding backfire effects, raises questions about how much engagement is optimal to improve science communication. More research is needed, of course, to correlate this approach with effects on participants’ and bystanders’ subsequent beliefs and behaviors.

In previous reports, we examined our intervention protocol from several perspectives. In a study [30], we found a clear tension between using principles of motivational interviewing and the imperative to limit the amount of misinformation that remains unchecked by facts. Separately, we discussed that infodemiologists adopt several informal roles in web-based discussions, serving both as hosts and translators [22]. In this study, we quantitatively evaluated the impact of these interventions, drawing inspiration from implementation science frameworks as a guide, with the intention of understanding to what extent these interventions attract attention from bystanders. Viewed in combination, our qualitative analyses plus this quantitative assessment provide a novel mixed methods approach to evaluating interventions to address web-based antivaccine sentiment specifically and digital misinformation in general. Such approaches provide a more complete picture of the extent to which interventions based on a blend of motivational interviewing principles and evidence-based interventions focusing on bystanders can be useful in counteracting networked misinformation on web-based platforms. While labor-intensive, such interventions can be one part of a comprehensive strategy to address medical misinformation in digital spaces, along with other evidence-based strategies.

Acknowledgments

This study was supported by the Robert Wood Johnson Foundation (grant numbers 76935 and 78084), Weill Cornell Medicine’s JumpStart program, and the Cornell Center for Social Science. The authors acknowledge the helpful insights of Kathleen Hall Jamieson, Oktawia Wojcik, and Nancy Barrand. The opinions expressed in this study do not necessarily reflect those of either the Robert Wood Johnson Foundation or Weill Cornell Medicine.

Conflicts of Interest

None declared.

  1. Simonov A, Sacher SK, Dubé JPH, Biswas S. The persuasive effect of Fox News: non-compliance with social distancing during the Covid-19 pandemic. National Bureau of Economic Research. 2020. URL: https://www.nber.org/papers/w27237 [accessed 2023-10-05]
  2. Hornik R, Kikut A, Jesch E, Woko C, Siegel L, Kim K. Association of COVID-19 misinformation with face mask wearing and social distancing in a nationally representative US sample. Health Commun. 2021;36(1):6-14. [CrossRef] [Medline]
  3. Loomba S, de Figueiredo A, Piatek SJ, de Graaf K, Larson HJ. Measuring the impact of COVID-19 vaccine misinformation on vaccination intent in the UK and USA. Nat Hum Behav. 2021;5(3):337-348. [FREE Full text] [CrossRef] [Medline]
  4. Pertwee E, Simas C, Larson HJ. An epidemic of uncertainty: rumors, conspiracy theories and vaccine hesitancy. Nat Med. 2022;28(3):456-459. [FREE Full text] [CrossRef] [Medline]
  5. Pierri F, Perry BL, DeVerna MR, Yang KC, Flammini A, Menczer F, et al. Online misinformation is linked to early COVID-19 vaccination hesitancy and refusal. Sci Rep. 2022;12(1):5966. [FREE Full text] [CrossRef] [Medline]
  6. Enders AM, Uscinski JE, Klofstad C, Stoler J. The different forms of COVID-19 misinformation and their consequences. Harvard Kennedy School (HKS) Misinformation Review. 2020:1-21. [FREE Full text] [CrossRef]
  7. Jolley D, Douglas KM. The effects of anti-vaccine conspiracy theories on vaccination intentions. PLoS One. 2014;9(2):e89177. [FREE Full text] [CrossRef] [Medline]
  8. Lee SJ, Lee CJ, Hwang H. The impact of COVID-19 misinformation and trust in institutions on preventive behaviors. Health Educ Res. 2023;38(1):95-105. [FREE Full text] [CrossRef] [Medline]
  9. Murthy VH. Confronting Health Misinformation: The U.S. Surgeon General's Advisory on Building a Healthy Information Environment. Washington, D.C. Office of the US Surgeon General; 2021.
  10. Vraga EK, Bode L. Defining misinformation and understanding its bounded nature: using expertise and evidence for describing misinformation. Political Commun. 2020;37(1):136-144. [CrossRef]
  11. Chou WYS, Gaysynsky A, Cappella JN. Where we go from here: health misinformation on social media. Am J Public Health. 2020;110(S3):S273-S275. [FREE Full text] [CrossRef] [Medline]
  12. Kahan DM. Misconceptions, misinformation, and the logic of identity-protective cognition. SSRN J. 2017 [FREE Full text] [CrossRef]
  13. Gorman GE, Gorman JM. Denying to the Grave: Why We Ignore the Science That Will Save Us, Revised and Updated Edition. New York, NY. Oxford University Press; 2021.
  14. Ecker UKH, Lewandowsky S, Cook J, Schmid P, Fazio LK, Brashier N, et al. The psychological drivers of misinformation belief and its resistance to correction. Nat Rev Psychol. 2022;1(1):13-29. [FREE Full text] [CrossRef]
  15. Wardle C, Derakhshan H. Information disorder: toward an interdisciplinary framework for research and policy making. Council of Europe Report. 2017. URL: https:/​/edoc.​coe.int/​en/​media/​7495-information-disorder-toward-an-interdisciplinary-framework-for-research-and-policy-making.​html [accessed 2023-10-05]
  16. Zarocostas J. How to fight an infodemic. Lancet. 2020;395(10225):676. [FREE Full text] [CrossRef] [Medline]
  17. Purnat TD, Vacca P, Czerniak C, Ball S, Burzo S, Zecchin T, et al. Infodemic signal detection during the COVID-19 pandemic: development of a methodology for identifying potential information voids in online conversations. JMIR Infodemiology. 2021;1(1):e30971. [FREE Full text] [CrossRef] [Medline]
  18. Eysenbach G. Infodemiology: the epidemiology of (mis)information. Am J Med. 2002;113(9):763-765. [FREE Full text] [CrossRef] [Medline]
  19. Knudsen J, Perlman-Gabel M, Uccelli IG, Jeavons J, Chokshi DA. Combating misinformation as a core function of public health. NEJM Catal Innov Care Deliv. 2023;4(2) [FREE Full text] [CrossRef]
  20. Chiou H, Voegeli C, Wilhelm E, Kolis J, Brookmeyer K, Prybylski D. The future of infodemic surveillance as public health surveillance. Emerg Infect Dis. 2022;28(13):S121-S128. [FREE Full text] [CrossRef] [Medline]
  21. Simon FM, Camargo CQ. Autopsy of a metaphor: the origins, use and blind spots of the ‘infodemic’. New Media Soc. 2021;25(8):2219-2240. [FREE Full text] [CrossRef]
  22. Gorman JM, Scales DA. Leveraging infodemiologists to counteract online misinformation: experience with COVID-19 vaccines. Harvard Kennedy School (HKS) Misinformation Review. 2022. URL: https:/​/misinforeview.​hks.harvard.edu/​article/​leveraging-infodemiologists-to-counteract-online-misinformation-experience-with-covid-19-vaccines/​ [accessed 2023-10-05]
  23. Seymour B, Gyenes N, Roberts H, Fish SA, Bermejo F, Zuckerman E. Public health, social networks, and the digital media ecosystem: emerging hypotheses. In: Substance Abuse Library and Information Studies. Presented at: Proceedings of the 39th Annual SALIS/AMHL Conference; May 3-6, 2017, 2017; Worcester. URL: https://salis.org/wp-content/uploads/2020/08/SALISJournal-vol4_2017.pdf#page=8
  24. Getman R, Helmi M, Roberts H, Yansane A, Cutler D, Seymour B. Vaccine hesitancy and online information: the influence of digital networks. Health Educ Behav. 2018;45(4):599-606. [CrossRef] [Medline]
  25. DiResta R, Lotan G. Anti-vaxxers are using Twitter to manipulate a vaccine bill. Wired. Jun 08, 2015. URL: https://www.wired.com/2015/06/antivaxxers-influencing-legislation/ [accessed 2023-10-05]
  26. Young DG, Miller JM. Political communication. In: Sears DO, Levy JS, Jerit J, Huddy L, editors. The Oxford Handbook of Political Psychology, 3rd Edition. New York, NY. Oxford University Press; 2023.
  27. Carpiano RM, Callaghan T, DiResta R, Brewer NT, Clinton C, Galvani AP, et al. Confronting the evolution and expansion of anti-vaccine activism in the USA in the COVID-19 era. Lancet. 2023;401(10380):967-970. [FREE Full text] [CrossRef] [Medline]
  28. Miller WR, Rollnick S. Motivational Interviewing: Helping People Change. London. Guilford Press; 2012.
  29. Gagneur A, Battista MC, Boucher FD, Tapiero B, Quach C, De Wals P, et al. Promoting vaccination in maternity wards—motivational interview technique reduces hesitancy and enhances intention to vaccinate, results from a multicentre non-controlled pre- and post-intervention RCT-nested study, Quebec, March 2014 to February 2015. Euro Surveill. 2019;24(36):1800641. [FREE Full text] [CrossRef] [Medline]
  30. Scales D, Gorman JM, DiCaprio P, Hurth L, Radhakrishnan M, Windham S, et al. Community-oriented Motivational Interviewing (MI): a novel framework extending MI to address COVID-19 vaccine misinformation in online social media platforms. Comput Human Behav. 2023;141:107609. [FREE Full text] [CrossRef] [Medline]
  31. How to respond to vocal vaccine deniers in public: best practice guidance. World Health Organization, European Regional Office. 2017. URL: https://www.who.int/europe/publications/i/item/WHO-EURO-2017-2899-42657-59427 [accessed 2023-10-05]
  32. Omari A, Boone KD, Zhou T, Lu PJ, Kriss JL, Hung MC, et al. Characteristics of the moveable middle: opportunities among adults open to COVID-19 vaccination. Am J Prev Med. 2023;64(5):734-741. [FREE Full text] [CrossRef] [Medline]
  33. Hassan A, Barber SJ. The effects of repetition frequency on the illusory truth effect. Cogn Res Princ Implic. 2021;6(1):38. [FREE Full text] [CrossRef] [Medline]
  34. Vraga EK, Bode L. Using expert sources to correct health misinformation in social media. Sci Commun. 2017;39(5):621-645. [CrossRef]
  35. Lewandowsky S. Climate change disinformation and how to combat it. Annu Rev Public Health. 2021;42:1-21. [FREE Full text] [CrossRef] [Medline]
  36. Dong L, Bogart LM, Gandhi P, Aboagye JB, Ryan S, Serwanga R, et al. A qualitative study of COVID-19 vaccine intentions and mistrust in Black Americans: recommendations for vaccine dissemination and uptake. PLoS One. 2022;17(5):e0268020. [FREE Full text] [CrossRef] [Medline]
  37. DiRusso C, Stansberry K. Unvaxxed: a cultural study of the online anti-vaccination movement. Qual Health Res. 2022;32(2):317-329. [CrossRef] [Medline]
  38. Gagneur A, Gosselin V, Dubé È. Motivational interviewing: a promising tool to address vaccine hesitancy. Vaccine. 2018;36(44):6553-6555. [CrossRef] [Medline]
  39. Scales D, Gorman J, Leff C, Gorman S. Effective ways to combat online medical and scientific misinformation: a hermeneutic narrative review and analysis. mediArXiv. Preprint posted online on February 16, 2021. 2021 [FREE Full text] [CrossRef]
  40. Townsend L, Wallace C. Social media research: a guide to ethics. University of Aberdeen. 2016. URL: https://www.gla.ac.uk/media/Media_487729_smxx.pdf [accessed 2023-10-05]
  41. Swire-Thompson B, DeGutis J, Lazer D. Searching for the backfire effect: measurement and design considerations. J Appl Res Mem Cogn. 2020;9(3):286-299. [FREE Full text] [CrossRef] [Medline]
  42. Sillence E. Seeking out very like-minded others: exploring trust and advice issues in an online health support group. Int J Web Based Communities. 2010;6(4):376-394. [CrossRef]
  43. Vraga EK, Bode L. I do not believe you: how providing a source corrects health misperceptions across social media platforms. Inf Commun Soc. 2017;21(10):1337-1353. [CrossRef]
  44. Prochaska JO, Johnson S, Lee P. The transtheoretical model of behavior change. In: Shumaker SA, Ockene JK, Riekert KA, editors. The Handbook of Health Behavior Change, 3rd Edition. New York, NY. Springer Publishing Company; 2009;59-83.
  45. Philip JK, Kerr G, Don ES, McColl R, Pals H. The elaboration likelihood model: review, critique and research agenda. Eur J Mark. 2014;48(11/12):2033-2050. [CrossRef]
  46. Reno JE, O'Leary S, Garrett K, Pyrzanowski J, Lockhart S, Campagna E, et al. Improving provider communication about HPV vaccines for vaccine-hesitant parents through the use of motivational interviewing. J Health Commun. 2018;23(4):313-320. [CrossRef] [Medline]
  47. Lemaitre T, Carrier N, Farrands A, Gosselin V, Petit G, Gagneur A. Impact of a vaccination promotion intervention using motivational interview techniques on long-term vaccine coverage: the PromoVac strategy. Hum Vaccin Immunother. 2019;15(3):732-739. [FREE Full text] [CrossRef] [Medline]
  48. Vaccine Persona Explainer: How do we get America vaccinated? Surgo Ventures. 2020. URL: https://surgoventures.org/vaccine-persona-explainer [accessed 2021-12-09]
  49. Chan MPS, Jones CR, Jamieson KH, Albarracín D. Debunking: a meta-analysis of the psychological efficacy of messages countering misinformation. Psychol Sci. 2017;28(11):1531-1546. [FREE Full text] [CrossRef] [Medline]
  50. Roozenbeek J, van der Linden S, Goldberg B, Rathje S, Lewandowsky S. Psychological inoculation improves resilience against misinformation on social media. Sci Adv. 2022;8(34):eabo6254. [FREE Full text] [CrossRef] [Medline]
  51. Schmid P, Schwarzer M, Betsch C. Weight-of-evidence strategies to mitigate the influence of messages of science denialism in public discussions. J Cogn. 2020;3(1):36. [FREE Full text] [CrossRef] [Medline]
  52. Cook J, Lewandowsky S, Ecker UKH. Neutralizing misinformation through inoculation: exposing misleading argumentation techniques reduces their influence. PLoS One. 2017;12(5):e0175799. [FREE Full text] [CrossRef] [Medline]
  53. Traberg CS, Roozenbeek J, van der Linden S. Psychological inoculation against misinformation: current evidence and future directions. Ann Am Acad Pol Soc Sci. 2022;700(1):136-151. [FREE Full text] [CrossRef]
  54. Stanford Internet Observatory. Memes, magnets and microchips: narrative dynamics around COVID-19 vaccines. Virality Project. 2022. URL: https:/​/fsi.​stanford.edu/​publication/​memes-magnets-and-microchips-narrative-dynamics-around-covid-19-vaccines [accessed 2023-10-05]
  55. Stocking G, Matsa KE, Khuzam M. News organizations were most prominent source in COVID-19 content shared in public Facebook spaces. Pew Research Center. 2020. URL: https:/​/www.​pewresearch.org/​journalism/​2020/​06/​24/​news-organizations-were-most-prominent-source-in-covid-19-content-shared-in-public-facebook-spaces/​ [accessed 2023-10-05]
  56. Stocking G, Matsa KE, Khuzam M. As COVID-19 emerged in U.S., Facebook posts about it appeared in a wide range of public pages, groups. Pew Research Center. 2020. URL: https:/​/www.​pewresearch.org/​journalism/​2020/​06/​24/​as-covid-19-emerged-in-u-s-facebook-posts-about-it-appeared-in-a-wide-range-of-public-pages-groups/​ [accessed 2023-10-05]
  57. Chou WYS, Budenz A. Considering emotion in COVID-19 vaccine communication: addressing vaccine hesitancy and fostering vaccine confidence. Health Commun. 2020;35(14):1718-1722. [CrossRef] [Medline]
  58. Penţa MA, Băban A. Message framing in vaccine communication: a systematic review of published literature. Health Commun. 2018;33(3):299-314. [CrossRef] [Medline]
  59. Ilgen JS, Eva KW, de Bruin A, Cook DA, Regehr G. Comfort with uncertainty: reframing our conceptions of how clinicians navigate complex clinical situations. Adv Health Sci Educ Theory Pract. 2019;24(4):797-809. [CrossRef] [Medline]
  60. Sillence E, Briggs P, Harris P, Fishwick L. A framework for understanding trust factors in web-based health advice. Int J Hum Comput Stud. 2006;64(8):697-713. [FREE Full text] [CrossRef]
  61. Walter N, Tukachinsky R. A meta-analytic examination of the continued influence of misinformation in the face of correction: how powerful is it, why does it happen, and how to stop it? Commun Res. 2019;47(2):155-177. [CrossRef]
  62. Brashier NM, Pennycook G, Berinsky AJ, Rand DG. Timing matters when correcting fake news. Proc Natl Acad Sci U S A. 2021;118(5):e2020043118. [FREE Full text] [CrossRef] [Medline]
  63. Walter N, Murphy ST. How to unring the bell: a meta-analytic approach to correction of misinformation. Commun Monogr. 2018;85(3):423-441. [CrossRef]
  64. Schmid P, Betsch C. Effective strategies for rebutting science denialism in public discussions. Nat Hum Behav. 2019;3(9):931-939. [CrossRef] [Medline]
  65. Kahan DM, Jenkins‐Smith H, Braman D. Cultural cognition of scientific consensus. J Risk Res. 2011;14(2):147-174. [CrossRef]
  66. Pennycook G, Cannon TD, Rand DG. Prior exposure increases perceived accuracy of fake news. J Exp Psychol Gen. 2018;147(12):1865-1880. [FREE Full text] [CrossRef] [Medline]
  67. Ecker UKH, Lewandowsky S, Cheung CSC, Maybery MT. He did it! She did it! No, she did not! Multiple causal explanations and the continued influence of misinformation. J Mem Lang. 2015;85:101-115. [FREE Full text] [CrossRef]
  68. van der Linden SL, Clarke CE, Maibach EW. Highlighting consensus among medical scientists increases public support for vaccines: evidence from a randomized experiment. BMC Public Health. 2015;15:1207. [FREE Full text] [CrossRef] [Medline]
  69. Maibach EW, van der Linden SL. The importance of assessing and communicating scientific consensus. Environ Res Lett. 2016;11(9):091003. [FREE Full text] [CrossRef]
  70. Bolsen T, Druckman JN. Counteracting the politicization of science. J Commun. 2015;65(5):745-769. [CrossRef]
  71. Epstein Z, Berinsky AJ, Cole R, Gully A, Pennycook G, Rand DG. Developing an accuracy-prompt toolkit to reduce COVID-19 misinformation online. Harvard Kennedy School Misinformation Review. 2021:1-12. [FREE Full text] [CrossRef]
  72. Pennycook G, Rand DG. Nudging social media toward accuracy. Ann Am Acad Pol Soc Sci. 2022;700(1):152-164. [FREE Full text] [CrossRef] [Medline]
  73. Orpin D. Chapter 9. #Vaccineswork: recontextualizing the content of epidemiology reports on Twitter. In: Pérez-Llantada C, Luzón MJ, editors. Science Communication on the Internet: Old Genres Meet New Genres. Amsterdam, the Netherlands. John Benjamins Publishing Company; 2019;173-194.
  74. Glasgow RE, Vogt TM, Boles SM. Evaluating the public health impact of health promotion interventions: the RE-AIM framework. Am J Public Health. 1999;89(9):1322-1327. [FREE Full text] [CrossRef] [Medline]
  75. Glasgow RE, Harden SM, Gaglio B, Rabin B, Smith ML, Porter GC, et al. RE-AIM planning and evaluation framework: adapting to new science and practice with a 20-year review. Front Public Health. 2019;7:64. [FREE Full text] [CrossRef] [Medline]
  76. Lane-Fall MB, Curran GM, Beidas RS. Scoping implementation science for the beginner: locating yourself on the "subway line" of translational research. BMC Med Res Methodol. 2019;19(1):133. [FREE Full text] [CrossRef] [Medline]
  77. Platt T, Platt J, Thiel DB, Kardia SLR. Facebook advertising across an engagement spectrum: a case example for public health communication. JMIR Public Health Surveill. 2016;2(1):e27. [FREE Full text] [CrossRef] [Medline]
  78. Kite J, Grunseit A, Li V, Vineburg J, Berton N, Bauman A, et al. Generating engagement on the make healthy normal campaign Facebook page: analysis of Facebook analytics. JMIR Public Health Surveill. 2019;5(1):e11132. [FREE Full text] [CrossRef] [Medline]
  79. Benis A, Khodos A, Ran S, Levner E, Ashkenazi S. Social media engagement and influenza vaccination during the COVID-19 pandemic: cross-sectional survey study. J Med Internet Res. 2021;23(3):e25977. [FREE Full text] [CrossRef] [Medline]
  80. Pennycook G, McPhetres J, Zhang Y, Lu JG, Rand DG. Fighting COVID-19 misinformation on social media: experimental evidence for a scalable accuracy-nudge intervention. Psychol Sci. 2020;31(7):770-780. [FREE Full text] [CrossRef] [Medline]
  81. Pennycook G, Epstein Z, Mosleh M, Arechar AA, Eckles D, Rand DG. Shifting attention to accuracy can reduce misinformation online. Nature. 2021;592(7855):590-595. [FREE Full text] [CrossRef] [Medline]
  82. Fazio L. Pausing to consider why a headline is true or false can help reduce the sharing of false news. Harvard Kennedy School Misinformation Review. 2020. URL: https://misinforeview.hks.harvard.edu/article/pausing-reduce-false-news/ [accessed 2023-10-05]
  83. Waszak PM, Kasprzycka-Waszak W, Kubanek A. The spread of medical fake news in social media—the pilot quantitative study. Health Policy Technol. 2018;7(2):115-118. [FREE Full text] [CrossRef]
  84. Liadeli G, Sotgiu F, Verlegh PWJ. A meta-analysis of the effects of brands’ owned social media on social media engagement and sales. J Mark. 2022;87(3):406-427. [FREE Full text] [CrossRef]
  85. Williamson P. Take the time and effort to correct misinformation. Nature. 2016;540(7632):171. [FREE Full text] [CrossRef]
  86. Allchin D. Ten competencies for the science misinformation crisis. Sci Educ. 2022;107(2):261-274. [CrossRef]
  87. Southwell BG. Social Networks and Popular Understanding of Science and Health: Sharing Disparities. Baltimore, MD. Johns Hopkins University Press; 2013.
  88. Zhang J, Centola D. Social networks and health: new developments in diffusion, online and offline. Annu Rev Sociol. 2019;45(1):91-109. [FREE Full text] [CrossRef]


CDC: Centers for Disease Control and Prevention
IRB: institutional review board
MI: motivational interviewing
RE-AIM: reach, effectiveness, adoption, implementation, and maintenance
WHO: World Health Organization


Edited by T Mackey; submitted 20.06.23; peer-reviewed by B Southwell, E Wilhelm, F Medina; comments to author 10.08.23; revised version received 16.09.23; accepted 30.09.23; published 14.11.23.

Copyright

©David Scales, Lindsay Hurth, Wenna Xi, Sara Gorman, Malavika Radhakrishnan, Savannah Windham, Azubuike Akunne, Julia Florman, Lindsey Leininger, Jack Gorman. Originally published in JMIR Infodemiology (https://infodemiology.jmir.org), 14.11.2023.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Infodemiology, is properly cited. The complete bibliographic information, a link to the original publication on https://infodemiology.jmir.org/, as well as this copyright and license information must be included.