Published in Vol 2, No 2 (2022): Jul-Dec

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/40198.
Platform Effects on Public Health Communication: A Comparative and National Study of Message Design and Audience Engagement Across Twitter and Facebook


Original Paper

1School of Information Sciences, Wayne State University, Detroit, MI, United States

2School of Information, University of South Florida, Tampa, FL, United States

3Department of Radiology, University of Michigan, Ann Arbor, MI, United States

4School of Medicine, Wayne State University, Detroit, MI, United States

Corresponding Author:

Nic DePaula, PhD

School of Information Sciences

Wayne State University

42 W Warren Ave

Detroit, MI, 48202

United States

Phone: 1 313 577 1825

Fax: 1 313 577 7563

Email: ndepaula@wayne.edu


Background: Public health agencies widely adopt social media for health and risk communication. Different platforms have different affordances, which may impact the quality and nature of the messaging and how the public engages with the content. However, these platform effects are rarely compared in studies of health and risk communication and had not previously been compared for the COVID-19 pandemic.

Objective: This study measures the potential media effects of Twitter and Facebook on public health message design and engagement by comparing message elements and audience engagement in COVID-19–related posts by local, state, and federal public health agencies in the United States during the pandemic. The aim is to advance theories of public health messaging on social media and provide recommendations for tailored social media communication strategies.

Methods: We retrieved all COVID-19–related posts from major US federal agencies related to health and infectious disease, all major state public health agencies, and selected local public health departments on Twitter and Facebook. A total of 100,785 posts related to COVID-19, from 179 different accounts of 96 agencies, were retrieved for the entire year of 2020. We adopted a framework of social media message elements to analyze the posts across Facebook and Twitter. For manual content analysis, we subsampled 1677 posts. We calculated the prevalence of various message elements across the platforms and assessed the statistical significance of differences. We also calculated and assessed the association between message elements with normalized measures of shares and likes for both Facebook and Twitter.

Results: Distributions of message elements were largely similar across both sites. However, political figures (P<.001), experts (P=.01), and nonpolitical personalities (P=.01) were significantly more present on Facebook posts compared to Twitter. Infographics (P<.001), surveillance information (P<.001), and certain multimedia elements (eg, hyperlinks, P<.001) were more prevalent on Twitter. In general, Facebook posts received more (normalized) likes (0.19%) and (normalized) shares (0.22%) compared to Twitter likes (0.08%) and shares (0.05%). Elements with greater engagement on Facebook included expressives and collectives, whereas posts related to policy were more engaged with on Twitter. Science information (eg, scientific explanations) comprised 8.5% (73/851) of Facebook and 9.4% (78/826) of Twitter posts. Correctives of misinformation only appeared in 1.2% (11/851) of Facebook and 1.4% (12/826) of Twitter posts.

Conclusions: In general, we find a data and policy orientation for Twitter messages and users and a local and personal orientation for Facebook, although also many similarities across platforms. Message elements that impact engagement are similar across platforms but with some notable distinctions. This study provides novel evidence for differences in COVID-19 public health messaging across social media sites, advancing knowledge of public health communication on social media and recommendations for health and risk communication strategies on these online platforms.

JMIR Infodemiology 2022;2(2):e40198

doi:10.2196/40198


Background

Social media have become integral tools for public health messaging and online communication of health and risk information worldwide [1-3]. As of 2021, in the United States, 72% of adults and 84% of those aged 18-29 years say they use at least 1 social media site [4,5], and the sites are widely adopted by public health agencies [3,6,7]. On social media, public health messages can be shared by users, widening message reach. The public may also like and comment on agency messages, and agencies may directly reply to public comments. Although there are opportunities for public health messaging on these sites, there are also challenges. These sites have been sources of misinformation, especially concerning the COVID-19 pandemic [8-10] and antivaccination propaganda [11,12]. The targeted marketing of health-harming products, such as e-cigarettes [13], has also been problematic. Nevertheless, given their prevalence, public health agencies need to understand the dynamics of these sites to better promote health behavior.

There is ample research on social media use by public health agencies [2,7,14-16]. However, studies are generally conducted on one site or the other, either Facebook or Twitter. Although studies in other domains abound that explore the distinct affordances or characteristics of different social media sites [17-20], few studies have examined user engagement with public health messages across platforms [13,21], and none have analyzed the actual messages posted by public health agencies across social media platforms. Despite the lack of such comparative studies, it is important to understand these media effects, or at least the differences across sites. Studies often use the term “social media” broadly when they investigate only a single platform. However, the stark differences across some platforms are now well researched [22-24], and there has been an explicit call to address social media affordances in health communication research [25]. This study thus makes a novel contribution to the literature by comparing public health messaging and audience engagement across two of the most popular platforms in public health communication.

Public Health Message Design and Audience Engagement

Research on public health messaging on social media has focused on 2 broad areas: (1) the content and purposes of messages and (2) audience (or user) engagement with the messages. Analyses of message content have focused on “themes,” such as “closures,” “risk factors,” “case updates,” “reassurance,” and others, in various pandemic and crisis contexts [26,27], including the COVID-19 pandemic [7,28]. Analyses of message purposes have discussed the goals of “to inform,” “call to action” [28], increase “self-efficacy” [29], “fight misinformation” [30], and others. However, there is a lack of formalization of message design elements and little consideration for the more objective textual elements of messages, including relevant content, such as the speaker, audience, and types of images in the messages. To address this shortcoming, in this study, we adopted a framework of textual and media message design elements that identify the various objective characteristics of the text—focusing on the content, not on the purpose—which may be useful for multiple health and risk communication scenarios and related research [31].

Audience or user engagement on social media is often formalized in the platform via a Like button, a Share button, and a Comment function, the content or count of which is appended to the message. Facebook also offers other sentiments or reactions to be expressed that are formalized as buttons and counts (ie, love, care, ha-ha, wow, sad, and angry). Although social media reactions to messages may not directly relate to behavioral intent or actual behavior change, analyses of this engagement provide some insight into public interest in and acceptance of the messaging [25] and may therefore help improve message strategies and message design, what others have termed evidence-based science communication [13]. There is a downside to an overreliance on user engagement as the ultimate goal of social media communication, since user engagement is biased toward positive emotional or high-arousal content [23,32,33]. However, these metrics at least provide some evidence of the quality or success of health promotion and information campaigns on these platforms and can be used to increase message reach [13].

Platform Effects on Health and Risk Messages

Although studies in the social media literature recognize the distinct affordances (ie, the functions or action possibilities [25]) of these technologies, previous studies on COVID-19 have not compared message elements across the most popular platforms: Facebook and Twitter [13,25]. Although similar, Facebook and Twitter have some key differences. On Facebook, connections between people are bidirectional and termed “friends.” On Twitter, they are unidirectional; individuals may follow others without being followed by them. This makes Twitter a more public and open platform. Facebook, however, is the more popular site, with a marketplace, event calendars, and pages that can be unidirectionally followed [34]. On both Facebook and Twitter, individuals may make posts that include text, hyperlinks, and photos or videos, but the text length of a post is restricted on Twitter to 280 characters. Both have a newsfeed that presents users with posts of their friends, or those they follow, the organization of which is determined by platform algorithms [35].

In practice, Facebook is more widely adopted than Twitter across all demographic groups [34]. Twitter has been used as a “news media” [36] and is associated with political news [37]. Twitter has been found to be used more for public information [38], whereas Facebook is used for “shared identities” [24] and “social interaction” [39] and is associated with higher levels of privacy concern and bonding social capital [22]. A recent study of user engagement with antismoking messages found that the message theme (ie, health/appearance/addiction, money, or family) had no impact on the click-through rate (CTR) of messages, but that Facebook showed both the highest and lowest CTR levels and, on average, higher CTRs than the same messages on Twitter [13], suggesting that users on Facebook generally engage more than users on Twitter. However, messages on Twitter had a higher website CTR than those on any other platform, indicating that Twitter users are more likely to visit and scroll through the website linked in the messages [13]. The literature thus supports the notion of Facebook as more of a social interaction platform and Twitter as more of a news-oriented platform.

Research Objectives and Summary

For this study, we aim to assess differences in public health message design elements and audience engagement with the various message elements across Twitter and Facebook regarding COVID-19 during 1 year of the pandemic. We therefore ask the following research questions (RQs):

  • RQ1. How do public health message design elements differ across Twitter and Facebook?
  • RQ2. How does audience engagement with public health message elements differ across Twitter and Facebook?

In the following sections, we describe the methods of the study, the results, and the discussion in relation to the literature and provide evidence-based policy recommendations for better-targeted health communication strategies.


Data Collection and Sampling

We identified 11 major federal health agencies in the United States associated with infection prevention and control [40], the major public health agency of each of the 50 US states (plus Washington, DC), and the major local public health agency of the largest city/county in each of the 50 states. We then searched for the official accounts of these agencies on Twitter and Facebook, as well as their own websites. Not all of the largest city/county public health agencies had a Facebook or Twitter presence. From the list of agencies identified, we retrieved all COVID-19–related posts generated in 2020. This period enables an analysis of messages from the beginning of the pandemic through several waves. We searched for any of the following strings anywhere in the posts of all identified agencies: ncov, covid, corona, pandemic, or sars-cov. To retrieve these posts, we used the standard Twitter application programming interface (API) and the Facebook API via CrowdTangle [41]. Note that the terms “post” and “message” are used here interchangeably. Unless otherwise specified, the term “post” refers to original posts and not retweets (shared posts) or replies (comments on other posts).
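The keyword filter described above amounts to a case-insensitive substring match. The following is a minimal sketch in our own code (the function name and example posts are illustrative, not the study's actual retrieval pipeline):

```python
# Search strings used to identify COVID-19-related posts (from the text above)
COVID_TERMS = ("ncov", "covid", "corona", "pandemic", "sars-cov")

def is_covid_related(text: str) -> bool:
    """Return True if any COVID-19 search string appears anywhere in the post."""
    lowered = text.lower()
    return any(term in lowered for term in COVID_TERMS)

# Illustrative posts (our own examples)
posts = [
    "Wear a mask to slow the spread of #COVID19",
    "Flu shot clinics open this Saturday",
    "Updated SARS-CoV-2 case counts for the county",
]
covid_posts = [p for p in posts if is_covid_related(p)]
```

Matching anywhere in the lowercased text also catches hashtags and compound terms such as "#COVID19" or "coronavirus".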

On Twitter, we identified 11 federal accounts (with a total of COVID-19–related original posts and retweets), 48 state accounts (with a total of 40,716 posts and retweets), and 33 local accounts (with a total of 20,164 posts and retweets) that matched the criteria. On Facebook, we identified 10 federal accounts (with a total of 3592 posts), 49 state accounts (with a total of 34,930 posts), and 38 local accounts (with a total of 14,356 posts) that matched the criteria. On Facebook, it is more difficult to differentiate original posts from shared posts; the figures just reported for Facebook include both. This data set of all COVID-19–related posts from all identified agencies in 2020 was called the population data set.

For manual content analysis, we used a stratified random sampling technique where we sampled 900 posts from Twitter and 900 posts from Facebook proportional to the number of posts made at each agency level (ie, local vs state vs federal), the sample data set. The rationale for the sampling was based on similar studies and on generating a manageable number of posts to manually code. For example, Reuter et al [13] analyzed a total of 1275 antismoking health messages posted across 3 social media platforms, and Slavik et al [15] used 501 tweets for content analysis of Canadian public health agencies’ messages on Twitter. We should note that for Facebook, our sampling strategy only focused on posts that were shorter than 340 characters (which may include relatively long hyperlinks). This was intended to provide a data set more comparable to Twitter posts, which are restricted to 280 characters (where hyperlinks may be shortened). After removing nonrelated posts, reply posts, and shared posts, or posts without any discernible content, our final sample data set consisted of a total of 1677 (93.2%) posts (826, 49.3%, original Twitter posts and 851, 50.7%, original Facebook posts) that were coded. For Twitter, this included 82 (9.9%) federal posts, 482 (58.4%) state posts, and 262 (31.7%) local posts. For Facebook, this included 60 (7.1%) federal posts, 560 (65.8%) state posts, and 231 (27.1%) local posts. Multimedia Appendix 1 presents the sampled accounts.
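Proportional stratified sampling of this kind can be sketched in a few lines. This is our own illustrative code, not the study's sampling script; the stratum sizes mirror the Twitter population figures reported above:

```python
import random

def stratified_sample(posts_by_level, total):
    """Draw from each stratum in proportion to its share of all posts."""
    population = sum(len(p) for p in posts_by_level.values())
    return {
        level: random.sample(posts, round(len(posts) / population * total))
        for level, posts in posts_by_level.items()
    }

# Illustrative strata sized like the Twitter population (federal/state/local)
strata = {
    "federal": list(range(4620)),
    "state": list(range(27_866)),
    "local": list(range(15_421)),
}
sample = stratified_sample(strata, 900)
```

Because each stratum's allocation is rounded, the realized total can differ from the target by a post or two.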

Coding Framework

We adapted an existing framework [31] for the analysis of health and risk communication social media message elements. The framework is based on theories of text analysis [31,42,43] and social media studies in health and crisis communication [7,15,28,29], including image use in risk communication [44]. These are interdisciplinary studies in the health communication, health informatics, and crisis communication literature. The framework focuses on message elements that are more objective compared to the abstract (eg, “open and transparent message” [45]) and metaphorical (eg, “fighting misinformation” [30]) categories used in the literature, rather than assuming everything is a “frame” or “theme” [26,27]. Message elements in this framework are composed of textual and media elements. The framework integrates message elements into 8 major dimensions: speech function, topic, threat focus, type of resource, audience, speaker, rhetorical tactic, and media. Each of these dimensions includes more granular message features (or elements). Tables 1 and 2 introduce definitions and examples of the textual and media elements, respectively. The framework is not exhaustive and could be reduced or expanded, as needed. It is conceived for relatively short social media posts, since the analysis focuses on the clause or sentence level; lengthier documents would therefore be considerably more complex to analyze. Further details of the framework and the elements are provided in Multimedia Appendix 2.

Table 1. Definitions and examples of message elements: textual.

Speech function
  • Representative: Clause in declarative form, describing a behavior, state, or event. Example: “#COVID19 can be spread by people who do not have symptoms”
  • Directive: A sentence that directs, commands, or mandates an action, especially via an imperative sentence. Example: “Continue to wear masks” OR “Donate blood.”
  • Question: A rhetorical question or question prompt. Example: “Are you looking for work? We are hiring!”
  • Expressive: Expression of sentiment by the message speaker (eg, sadness, appreciation). Example: “Thank you, #EMS heroes, for staying strong”
  • Request: Request to participate in research, volunteer, or means to reach an agency. Example: “Call us for questions at this number”

Topic
  • Protection: Information about what to do to prevent or treat the issue. Example: “Disinfect things you and your family touch frequently”
  • Policy: Actions, policies, or programs of officials, government agencies, or related entities. Example: “Multnomah County is almost ready for reopening schools.”
  • Surveillance: Statistics or data about prevalence (eg, cases/deaths). Example: “Yesterday, there were 85 new deaths”
  • Science: Describes or explains a cause, mechanism, or symptom of the issue. Example: “there is no evidence that produce can transmit #COVID19”
  • Emergent: Event of emergency concern or immediate priority. Example: “Travelers: DON'T book air travel to NY for just a few days”

Resource type
  • Interactive: Interactive service, such as question-and-answer (Q&A) with policy makers or watching live. Example: “FDA will host a virtual Town Hall on 3D printed swabs”
  • Material: Testing sites, financial assistance, vaccine provision. Example: “Use our map to find locations for vaccination sites.”
  • Corrective: Correction of a rumor, misinformation, or pointing to related resources. Example: “A death previously reported in Warren was incorrect, and has been removed.”

Focus and audience
  • Group: Refers to a demographic group (eg, adults, Hispanics) or a vulnerable population. Example: “Cancer patients are among those at high risk of serious illness from a COVID19 infection.”
  • Secondary: Consequences of or issues directly related to the main issue. Example: “Many are feeling stressed because of #COVID19.”
  • Other language: Message or part of message in another language, including sign language. Example: “Números del #COVID19 en California:”

Speaker
  • External: Expert or staff from another agency. Example: “The head of the CDC will speak…”
  • Political: Mayor, governor, or other political figure. Example: “Watch the Mayor’s updates on…”
  • Expert: Expert or staff of the agency. Example: “Our own Dr. Elinore will discuss the crisis”
  • Personality: Nonpolitical or nongovernmental personality, including celebrities or community members. Example: “Juan from Blue Eagles football club speaks about COVID19”

Rhetorical
  • Collective: Focus on collective terms to characterize an issue or to address it. Example: “We all need to do our part to combat Covid-19”
  • Emphasis: Sentence with an exclamation point or with an all-capitalized directive. Example: “WEAR a mask!”
  • Positive: Positive framing of agency action. Example: “We’re making progress is getting vaccines”
  • Metaphor: Using metaphors to explain the science or prevention of the issue. Example: “The swiss cheese respiratory virus defense”
Table 2. Definitions and examples of message elements: media.
  • Hyperlink: A long or short web URL. Example: https://twitter.com/...
  • Hashtag: Any term preceded by a # symbol. Example: #COVID-19 #WearAMask
  • Text-in-image: Image with additional text not included in the text part of the message. See examples below.
  • Illustration: Illustration in the image (at least beyond use of a table and colors).
  • Photograph: Photograph of a person, object, or scene.
  • Infographic: Image that conveys data or illustrated directives (overrides illustration).
  • Video: A video embedded in the message.

Content Analysis

The content analysis consisted of manual binary coding for the presence or absence of each element in a post. As the definitions of the categories were refined, some categories became mutually exclusive by definition, especially within each textual or media dimension. For example, a question is, by definition, neither a representative nor an expressive. These coding rules are summarized in Tables 1 and 2 and are further detailed in Multimedia Appendix 2.

A random training sample of 150 posts (75, 50%, from Twitter and 75, 50%, from Facebook) was first retrieved for training and category development. Using these 150 posts, during training, 3 authors updated and defined the message categories. Once this training was accomplished, the 3 authors independently began coding a 20% subsample of the sample data set, where at least 2 coders double-coded the same post to calculate the Cohen κ statistic of interrater reliability (IRR).

After obtaining IRR measures, the coders discussed the results. At this point, discrepancies in coding existed and needed to be reconciled. In particular, there were issues with the representative and request speech functions, the external speaker, and some of the rhetorical dimensions. For example, it was not clear whether a slogan on an image, such as “COVID-19 news update,” should be considered a representative sentence. We ultimately agreed on the definitions as shown in Multimedia Appendix 2, although IRR results were not perfect for all categories; the κ values are provided in Multimedia Appendix 1. After the IRR analyses, we discussed issues identifying the categories and then refined and narrowed the rules for final coding of the data. Where discrepancies existed across coders and categories were revised, we re-examined the data based on the revised definitions and obtained agreement among coders. We then coded the remaining data. Each coder independently coded approximately 450 posts, producing a final sample data set of 1677 posts for statistical analysis.

Statistical Analyses

To address our first RQ, we calculated the distribution of each message element on Twitter and Facebook and then compared this total across platforms via an independent 2-sample Z-test of proportions, where the null hypotheses assumed that the proportion of each message element is equal on both platforms. Although Z-tests expect normal distributions, and social media phenomena are notoriously not normally distributed, given the relatively large sample of most message elements, we found it reasonable to apply the Z-tests [44].
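The 2-sample Z-test of proportions can be computed from the element counts alone. Below is a minimal sketch in our own code (not the study's analysis script), using a pooled standard error and the standard normal CDF; passing the Twitter counts first reproduces the sign convention reported later, where a positive Z-score indicates higher use on Twitter:

```python
import math

def two_proportion_ztest(x1, n1, x2, n2):
    """Z-test of H0: x1/n1 == x2/n2, using a pooled standard error."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-tailed P value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hashtag use: 613/826 Twitter posts vs 392/851 Facebook posts (Table 4)
z, p = two_proportion_ztest(613, 826, 392, 851)  # z ≈ 11.76, p < .001
```

The example reproduces the Z-score reported for hashtags in Table 4.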

To address our second RQ, we operationalized audience engagement as normalized frequencies of likes and shares. Other studies have used the CTR to measure audience engagement [13], seemingly nonnormalized tweet counts [15], and regression models where follower count, and other dimensions, are controlled for [45]. The CTR measure used by Reuter et al [13] was not possible for our study since we could not have access to message clicks or actual message views (the total_views field provided by the Facebook API was not reliable and contained missing data; no such measure was provided by the Twitter API). Our approach is simpler than the regression models, but given the focus on a single issue, the random sampling of data across agencies and time, and normalized measure of likes and shares based on an agency’s follower count, our approach provides a robust and easy-to-interpret method to test the association between message features and audience engagement.

We calculated a measure of normalized likes (NLm) as the number of likes of each message “m,” divided by the follower count of the account that posted the message. NLm is the percentage of the agency’s follower count that liked the message. Although Facebook includes additional positive and negative measures of audience engagement—namely love, care, ha-ha, wow, sad, and angry—these were not included as part of the NLm measure to make it more comparable with the single like feature of Twitter. Although we considered and analyzed the more negative measures of Facebook sentiment, namely sad and angry, these overly complicated the research and ultimately seemed out of scope, since our aim was to compare Facebook and Twitter elements. This study thus focused on only likes and shares on Facebook and Twitter, both of which are types of positive engagement. Generally, in this study, engagement refers to “liking” or “sharing” a message.

Similar to normalized likes, we created a measure of normalized shares (NSm) of each message “m.” The NSm measure, compared to likes, can be more directly considered a diffusion rate [46] or retransmission rate [7] of a message (or of message elements), since a share directly retransmits the message to the user’s network. Although messages are not only liked or shared by the followers of an account, the size of an account’s following largely influences the total engagement with posts of that account [47]. Equations for these normalized like and normalized share measures are provided in Multimedia Appendix 3.
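In code, both normalized measures reduce to dividing an engagement count by the posting account's follower count and expressing the result as a percentage. A minimal sketch (the function name and example numbers are our own):

```python
def normalized_engagement(count, followers):
    """NL_m or NS_m: likes (or shares) of message m as a percentage
    of the posting account's follower count."""
    return 100.0 * count / followers

# e.g., 190 likes on a post by an account with 100,000 followers
nl = normalized_engagement(190, 100_000)  # 0.19 (%)
```

The same function serves for NLm and NSm; only the count passed in (likes vs shares) differs.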

For every message element, we then computed the mean NS and mean NL of all messages that contained the element, and of all messages that did not contain it, and compared these 2 groups via a 2-tailed independent-samples Wilcoxon-Mann-Whitney (WMW) test, given the skewness of the data and as similar studies have approached it [15]. We considered and discussed P≤.05 as statistically significant.
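The WMW comparison of the two groups can be sketched with a hand-rolled rank sum and the large-sample normal approximation. This is our own simplified code, omitting the tie-variance correction that full statistical packages apply:

```python
import math

def midranks(values):
    """1-based ranks, with tied values assigned their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        mid = (i + j) / 2 + 1  # average of 1-based ranks i+1..j+1
        for k in range(i, j + 1):
            ranks[order[k]] = mid
        i = j + 1
    return ranks

def mann_whitney(a, b):
    """Two-sided WMW test via the normal approximation (no tie correction)."""
    n1, n2 = len(a), len(b)
    ranks = midranks(list(a) + list(b))
    u = sum(ranks[:n1]) - n1 * (n1 + 1) / 2  # U statistic for group a
    mu = n1 * n2 / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u - mu) / sigma
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return u, p
```

In practice, `a` would hold the normalized engagement values of messages containing an element and `b` those of messages without it.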


Data Set Details

Table 3 shows descriptive statistics for the final sample data set in relation to the population data set of COVID-19–related posts. The list of agency accounts in the sample is provided in Multimedia Appendix 1. As shown in Table 3, local, state, and federal agencies made a comparable number of Facebook and Twitter posts (these measures do not include shares or retweets). In general, per account, state agencies were more active in posting than local and federal agencies. For example, on Facebook—based on population statistics—state accounts made 712 posts per account (34,930 total posts by 49 accounts), whereas local accounts made 377 posts per account (14,356 total posts by 38 accounts), and federal accounts made 359 posts per account (3592 total posts by 10 accounts). Results were relatively similar for Twitter.

Figure 1 shows the mean and IQR of account followers, separated by local, state, and federal agency accounts (based on the sample data set). There were strong variations across local, state, and federal agencies in the distribution of followers and platforms. Not surprisingly, federal agency accounts had the most followers, and state agencies had more followers than local agencies, on average. Federal agencies were more popular (ie, had more followers) on Twitter, whereas state agencies were more popular on Facebook. Local agencies were similarly popular on Facebook and Twitter. Generally, there is great variation in the top quartile of the distribution. Detailed numbers for this box plot can be found in Multimedia Appendix 3.

Table 3. Statisticsa for the sample data set as a percentage of the population of COVID-19 posts in 2020.
  • Facebook accounts, n/N (%): local 32/38 (84.0); state 48/49 (98.0); federal 9/10 (90.0); all 89/97 (92.0)
  • Twitter accounts, n/N (%): local 29/33 (88.0); state 45/48 (94.0); federal 9/11 (82.0); all 83/92 (90.0)
  • Facebook total posts, n/N (%): local 231/14,356 (1.6); state 560/34,930 (1.6); federal 60/3592 (1.7); all 851/52,878 (1.6)
  • Twitter total posts, n/N (%): local 262/15,421 (1.7); state 482/27,866 (1.7); federal 82/4620 (1.8); all 826/47,907 (1.7)

aStatistics are for the final sample data set used in content and statistical analyses in relation to the population data set of all COVID-19–related posts from all accounts identified in 2020.

Figure 1. Box plot of IQRs of followers per account across agency levels and platforms.

Platform Effects on Message Design

Table 4 shows the total count of each message element in the coded sample data set as the number of posts in which the element appeared, separately for Facebook and Twitter. Table 4 also provides results from a 2-tailed Z-test that compares whether the proportions are equal across platforms. Results showed that most features are used to a similar extent across platforms. These results provide some validity for the notion that these message features are indeed part of public health and risk communication on social media more broadly. However, we also found some statistically significant differences across the 2 sites. A positive Z-score indicates higher use on Twitter; a negative score indicates higher use on Facebook.

Figure 2 shows the message elements used significantly more or less on Facebook or Twitter, relative to each other, with bars identifying the percentage of posts in which each message element appeared. External, political, and expert actors, along with video, photograph, and other language, were the features more frequently used in Facebook posts compared to Twitter posts. Policy, directive, infographic, surveillance, hyperlink, and hashtag features were used more frequently on Twitter compared to Facebook. Personality and positive framing features were not included in Figure 2 due to the low sample size. However, policy was included in the graph, although at the significance boundary.

Table 4. Message design elements across Facebook (n=851) and Twitter (n=826) posts. For each element: Facebook, n (%); Twitter, n (%); Z-score; P value.

Speech function
  • Representative: 755 (88.7); 722 (87.4); Z=–0.83; P=.41
  • Directive: 344 (40.4); 374 (45.2); Z=2.01; P=.04
  • Question: 107 (12.5); 96 (11.6); Z=–0.60; P=.55
  • Expressive: 79 (9.2); 77 (9.3); Z=0.03; P=.98
  • Request: 28 (3.2); 38 (4.6); Z=1.40; P=.17

Topic
  • Protection: 391 (45.9); 395 (47.8); Z=0.77; P=.44
  • Policy: 292 (34.3); 321 (38.8); Z=1.93; P=.05
  • Surveillance: 160 (18.8); 222 (26.8); Z=3.94; P<.001
  • Science: 73 (8.5); 78 (9.4); Z=0.62; P=.53
  • Emergent: 39 (4.5); 26 (3.1); Z=–1.52; P=.13

Resource type
  • Interactive: 192 (22.5); 175 (21.1); Z=–0.68; P=.49
  • Material: 112 (13.1); 112 (13.5); Z=0.24; P=.81
  • Corrective: 11 (1.2); 12 (1.4); Z=0.28; P=.78

Focus and audience
  • Group: 85 (9.9); 113 (13.6); Z=2.34; P=.02
  • Secondary: 73 (8.5); 59 (7.1); Z=–1.09; P=.27
  • Other language: 42 (4.9); 25 (3.0); Z=–1.99; P=.04

Speaker
  • External: 153 (17.9); 86 (10.4); Z=–4.43; P<.001
  • Political: 89 (10.4); 28 (3.3); Z=–5.68; P<.001
  • Expert: 66 (7.7); 39 (4.7); Z=–2.56; P=.01
  • Personality: 17 (1.9); 5 (0.6); Z=–2.51; P=.01

Rhetorical
  • Collective: 123 (14.4); 105 (12.7); Z=–1.04; P=.30
  • Emphasis: 103 (12.1); 81 (9.8); Z=–1.50; P=.13
  • Positive: 12 (1.4); 23 (2.7); Z=1.97; P=.05
  • Metaphor: 5 (0.5); 2 (0.2); Z=–1.10; P=.27

Media
  • Hyperlink: 485 (56.9); 597 (72.2); Z=6.54; P<.001
  • Hashtag: 392 (46.0); 613 (74.2); Z=11.76; P<.001
  • Text-in-image: 387 (45.4); 343 (41.5); Z=–1.63; P=.10
  • Illustration: 235 (27.6); 258 (31.2); Z=1.63; P=.10
  • Photograph: 196 (23.0); 170 (20.5); Z=–1.22; P=.22
  • Infographic: 101 (11.8); 149 (18.0); Z=3.55; P<.001
  • Video: 130 (15.2); 83 (10.0); Z=–3.21; P<.001
Figure 2. Elements used significantly more on Facebook and significantly more on Twitter.

Platform Effects on Audience Engagement

Tables 5 and 6 show audience engagement with messages containing each specific feature compared to those without the feature, calculated separately for Facebook and Twitter as normalized likes and normalized shares. In general, Facebook had higher user engagement compared to Twitter. In addition, Facebook users used shares more frequently than likes, while Twitter users liked more frequently than they shared. Facebook posts, on average, were liked by 0.19% of account followers, whereas Twitter posts, on average, were liked by 0.08% of account followers, about 2.25 times higher for Facebook. Regarding sharing, Facebook posts, on average, were shared by 0.22% of account followers, whereas Twitter posts were shared by 0.05% of account followers, more than a 4.4-fold difference. However, these engagement measures do not include other forms of engagement on Facebook (eg, love, care), as previously discussed under Methods.

Table 5 provides the mean normalized likes of all messages with each feature compared to those without it, along with P values for the WMW test comparing these 2 sets. For example, in the Facebook sample, on average, 0.16% of an account's followers liked messages that contained a representative, whereas 0.26% liked messages that did not. Therefore, on Facebook, messages without a representative were liked more than messages with one, although this difference was not statistically significant (P=.22). On Twitter, in contrast, on average, 0.08% of an account's followers liked messages that contained a representative and 0.05% liked messages that did not, a significant difference (P<.001).

Table 6 provides the mean normalized shares of all messages with the feature and those without it, along with P values from the WMW test comparing differences between them. Results here can be similarly interpreted as the results in Table 5.
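As a rough illustration of the WMW comparison behind Tables 5 and 6 (the paper does not publish its analysis code, so this is only a sketch), the U statistic can be computed with the standard library alone. The engagement values below are hypothetical.

```python
# Minimal Wilcoxon-Mann-Whitney U statistic: a rank-based comparison of
# normalized engagement for posts with vs without a message element.
from itertools import chain

def mann_whitney_u(x, y):
    """U statistic for sample x versus y, using midranks for ties."""
    combined = sorted(chain(x, y))
    ranks = {}  # value -> midrank (1-based; tied values share the average rank)
    i = 0
    while i < len(combined):
        j = i
        while j < len(combined) and combined[j] == combined[i]:
            j += 1
        ranks[combined[i]] = (i + 1 + j) / 2  # mean of ranks i+1 .. j
        i = j
    rank_sum_x = sum(ranks[v] for v in x)
    return rank_sum_x - len(x) * (len(x) + 1) / 2  # U = R_x - n_x(n_x+1)/2

# Hypothetical normalized likes (% of followers) for posts with/without a feature:
with_feature = [0.21, 0.35, 0.10, 0.42]
without_feature = [0.05, 0.12, 0.08]
u = mann_whitney_u(with_feature, without_feature)  # 11.0 out of a maximum n1*n2 = 12
```

In practice, the P values reported in the tables would come from the null distribution of U (or its normal approximation), as implemented in standard statistical packages.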

Figure 3 shows the message elements from Tables 5 and 6 that had a significant association with an increase or decrease in normalized likes and shares, expressed as the percentage-point change associated with inclusion of the element. Expressives and the use of a collective frame were associated with more likes on both platforms. Surveillance information and infographics were also associated with more likes and shares on both Facebook and Twitter. References to material resources, surprisingly, were generally associated with fewer likes and shares on both platforms; we speculate this may be due to the repeated posts about testing and vaccine sites coded under material. Although political figures were more present on Facebook than on Twitter, they were associated with less engagement on both platforms, especially Facebook. Requests were particularly popular on Facebook but not significant on Twitter. Correctives and policy information were associated with higher engagement on Twitter but less so, or not significantly, on Facebook.

Table 5. Mean percentage of account followers that liked messages with and without specific elements.
Message element | Facebook: with feature | Facebook: without feature | P valuea | Twitter: with feature | Twitter: without feature | P valuea
Speech function
Representative | 0.16 | 0.26 | .22 | 0.08 | 0.05 | <.001
Directive | 0.20 | 0.15 | .01 | 0.07 | 0.09 | <.001
Question | 0.26 | 0.16 | .04 | 0.05 | 0.08 | <.001
Expressive | 0.28 | 0.16 | <.001 | 0.10 | 0.08 | <.001
Request | 0.52 | 0.16 | .05 | 0.06 | 0.08 | .32
Topic
Protection | 0.18 | 0.17 | .43 | 0.08 | 0.08 | .02
Policy | 0.19 | 0.17 | .03 | 0.09 | 0.07 | .20
Surveillance | 0.13 | 0.18 | .02 | 0.12 | 0.07 | <.001
Science | 0.14 | 0.18 | .41 | 0.05 | 0.08 | .08
Emergent | 0.14 | 0.17 | .26 | 0.25 | 0.07 | .06
Resource type
Interactive | 0.17 | 0.17 | .20 | 0.07 | 0.08 | .04
Material | 0.05 | 0.19 | <.001 | 0.05 | 0.08 | <.001
Corrective | 0.18 | 0.17 | .49 | 0.41 | 0.07 | .03
Focus and audience
Group | 0.16 | 0.17 | <.001 | 0.04 | 0.09 | <.001
Secondary | 0.13 | 0.18 | .13 | 0.06 | 0.08 | .01
Other language | 0.10 | 0.18 | .07 | 0.02 | 0.08 | <.001
Speaker
External | 0.13 | 0.18 | .07 | 0.06 | 0.08 | .13
Political | 0.12 | 0.18 | .01 | 0.06 | 0.08 | .08
Expert | 0.17 | 0.17 | .06 | 0.06 | 0.08 | .42
Personality | 0.22 | 0.17 | .01 | 0.06 | 0.08 | .30
Rhetorical
Collective | 0.27 | 0.16 | <.001 | 0.10 | 0.08 | .004
Emphasis | 0.29 | 0.16 | .004 | 0.08 | 0.08 | .10
Positive | 0.41 | 0.17 | .12 | 0.10 | 0.08 | .43
Metaphor | 0.41 | 0.17 | .09 | 0.02 | 0.08 | .26
Media
Hyperlink | 0.15 | 0.20 | <.001 | 0.07 | 0.10 | <.001
Hashtag | 0.19 | 0.16 | .32 | 0.07 | 0.10 | .01
Text-in-image | 0.17 | 0.17 | .01 | 0.09 | 0.07 | .002
Illustration | 0.10 | 0.20 | .03 | 0.06 | 0.09 | .12
Photograph | 0.21 | 0.16 | .08 | 0.07 | 0.08 | <.001
Infographic | 0.20 | 0.17 | <.001 | 0.12 | 0.07 | <.001
Video | 0.21 | 0.17 | .07 | 0.07 | 0.08 | .09

aP values refer to the Wilcoxon-Mann-Whitney test comparing mean normalized likes for posts containing the feature with those not containing it, separately for Facebook and Twitter.

Table 6. Mean percentage of account followers that shared messages with and without specific features.
Message element | Facebook: with feature | Facebook: without feature | P valuea | Twitter: with feature | Twitter: without feature | P valuea
Speech function
Representative | 0.20 | 0.16 | .01 | 0.06 | 0.03 | <.001
Directive | 0.17 | 0.21 | .27 | 0.05 | 0.06 | <.001
Question | 0.22 | 0.19 | .10 | 0.04 | 0.06 | .003
Expressive | 0.29 | 0.19 | .06 | 0.07 | 0.05 | .06
Request | 0.53 | 0.18 | .18 | 0.05 | 0.06 | .39
Topic
Protection | 0.16 | 0.23 | .03 | 0.05 | 0.06 | .002
Policy | 0.16 | 0.21 | .004 | 0.06 | 0.05 | .04
Surveillance | 0.25 | 0.18 | <.001 | 0.09 | 0.04 | <.001
Science | 0.15 | 0.20 | .36 | 0.04 | 0.06 | .05
Emergent | 0.28 | 0.19 | .04 | 0.12 | 0.05 | .03
Resource type
Interactive | 0.30 | 0.17 | .34 | 0.05 | 0.06 | .35
Material | 0.05 | 0.22 | <.001 | 0.04 | 0.06 | .45
Corrective | 0.18 | 0.20 | .29 | 0.18 | 0.05 | .19
Focus and audience
Group | 0.12 | 0.20 | .001 | 0.03 | 0.06 | <.001
Secondary | 0.33 | 0.18 | .19 | 0.04 | 0.06 | .01
Other language | 0.09 | 0.20 | .16 | 0.02 | 0.06 | .001
Speaker
External | 0.10 | 0.22 | .04 | 0.05 | 0.06 | .18
Political | 0.07 | 0.21 | <.001 | 0.02 | 0.06 | .01
Expert | 0.07 | 0.21 | .26 | 0.03 | 0.06 | .02
Personality | 0.08 | 0.20 | .31 | 0.03 | 0.06 | .21
Rhetorical
Collective | 0.20 | 0.19 | .07 | 0.07 | 0.05 | .28
Emphasis | 0.44 | 0.16 | .04 | 0.06 | 0.05 | .31
Positive | 0.22 | 0.20 | .34 | 0.05 | 0.06 | .40
Metaphor | 1.63 | 0.19 | .27 | 0.01 | 0.06 | .14
Media
Hyperlink | 0.14 | 0.26 | .003 | 0.05 | 0.06 | .08
Hashtag | 0.26 | 0.14 | .37 | 0.05 | 0.06 | .04
Text-in-image | 0.24 | 0.16 | <.001 | 0.07 | 0.05 | <.001
Illustration | 0.15 | 0.21 | .40 | 0.05 | 0.06 | .13
Photograph | 0.11 | 0.22 | <.001 | 0.04 | 0.06 | <.001
Infographic | 0.26 | 0.19 | <.001 | 0.09 | 0.05 | <.001
Video | 0.10 | 0.21 | .01 | 0.04 | 0.06 | .01

aP values refer to the Wilcoxon-Mann-Whitney test comparing mean normalized shares for posts containing the feature with those not containing it, separately for Facebook and Twitter.

Figure 3. Significant changes in likes and shares associated with the inclusion of a message element. The blue bars refer to increases and the red bars to decreases in mean normalized likes and mean normalized shares associated with the inclusion of the message element.

Principal Findings

This study analyzed 1677 COVID-19–related posts on Facebook and Twitter by public health agencies across the United States in 2020 and found differences and similarities in the overall use and popularity of these sites in terms of message design elements and audience engagement. Our results show that Facebook posts received, in general, 2.25 times more likes and 4.4 times more shares than posts on Twitter. Within each platform, as a percentage of account followers, Facebook messages were shared more than they were liked, whereas Twitter messages were liked more than they were shared.

Our results show that messages on Twitter, compared to Facebook, are significantly more focused on surveillance information (eg, data and statistics about the threat), policy information, infographics, and hyperlinks. Moreover, federal agencies are more active and more popular on Twitter compared to Facebook, whereas local and state agencies are more active or more popular on Facebook. We also observe that messages on Facebook, compared to Twitter, make significantly more references to political figures, public health experts, and (nonpolitical) personalities (eg, personal stories or local celebrities) as speakers in the messages. From this, we may infer a data and policy orientation for Twitter and a local and personal orientation for Facebook.

We observed that data (eg, infographics, surveillance data) and policy information had significant positive associations with audience engagement on Twitter but not at all, or not as much, on Facebook, further suggesting this data and policy characterization for Twitter. Although Facebook was the platform where political figures and health experts were more highlighted as speakers in the messages, this personalization was generally not associated with higher engagement on either site. However, we observe that photographs, which are often of people, and rhetorical elements such as collective framing (eg, "we are in this together"), positive framing (eg, "we are trying our best"), and emphasis (eg, exclamation points), which may trigger sentiment and personal connection, received more (in some cases significantly more) audience engagement on Facebook but not as much, or not at all, on Twitter. This further suggests the local and personal orientation for Facebook.

The distribution of message design elements is largely similar across both platforms, suggesting consistency in public health messaging, but with some significant differences between the 2 social media sites studied. Results also show significant associations between message elements and audience engagement, with some expected and surprising differences across platforms. In general—for this health and risk communication scenario—we may thus suggest that Twitter has more of a data and policy orientation, whereas Facebook has more of a local and personal orientation on the content, which largely follows the literature on social media affordances.

Integration With Existing Literature

Previous studies have examined the characteristics of Facebook in relation to Twitter as 2 of the major social media sites in the United States and in the world today. Generally, studies support the notion that Twitter is more of a “news media” [22,36] for “information dissemination” [38] and for being “quickly informed” [39], while Facebook is more for “shared identities,” “photographs” [24], and “social interaction” [39], being associated more with bonding social capital [22]. This distinction between Twitter and Facebook is usually explained as the specific affordances of each site [13,25], which may be related to some of its technical features, such as the more open unidirectional networks of Twitter compared to the bidirectional networks of Facebook [38]. Studies also suggest that certain technical features of a site (eg, focus on visual imagery) may lead to an overall higher audience engagement [13,22].

In this study, we did not analyze whether certain platform features caused the use of specific message elements or whether certain message features caused more or less engagement. However, our results generally support the existing literature suggesting that Facebook, while bigger and more popular across the US adult population, has more of a local and personal orientation, associated with close social interactions. Twitter, in contrast, is a more active and more popular site for federal agencies, compared to local and state agencies, and both the content and engagement on Twitter point to more of a data and policy orientation. Ultimately, we observe great similarities in message elements and audience engagement across Facebook and Twitter, suggesting a standardization of social media policies and practices across agencies and platforms, as well as similarities in user engagement on both sites.

Contributions to Health Communication Policy

This study provides some evidence for policy recommendations on social media health communication strategies. These recommendations are based on the results of this study, which is focused on COVID-19 communication during the beginning and multiple waves of the pandemic in 2020. Public health agencies and further research need to assess whether these are valid for broader contexts as well.

Recommendation 1

For public health agencies using Facebook, we recommend caution when featuring political figures and external experts in messages, and instead highlighting nonpolitical or nongovernment personalities, such as local celebrities or ordinary individuals with a special story to tell. We also see an opportunity for greater, or at least continued, use of emotional expressions and collective frames to generate greater positive engagement.

Our results show that messages on Facebook, compared to Twitter, are significantly more focused on highlighting political figures, as well as internal and external experts. However, political figures and external experts were generally associated with less engagement on Facebook. Personalities, including celebrities or ordinary people (eg, an authentic post of a child from the community), were significantly associated with greater engagement on Facebook but were present in few posts (2%) on Facebook. Ultimately, the use of expressives (ie, expressing emotions) and collective frames (eg, using collective pronouns and focusing on collective issues) were particularly well engaged with on Facebook.

Recommendation 2

For public health agencies using Twitter, we recommend caution in the use of hyperlinks and hashtags if the goal is to increase message likes and overall message diffusion, but continued use of surveillance information and infographics. Moreover, we recommend a greater focus on messages containing emergent issues (eg, emergency or timely information) and on the use of correctives to address misinformation, because both were uncommon yet associated with greater positive engagement.

Our results show that messages on Twitter, compared to Facebook, are significantly more focused on policy and surveillance information and include significantly more hyperlinks and hashtags. Since the hashtag is a textual construction first popularized on Twitter, this is not surprising. However, both hashtags and hyperlinks were generally associated with less engagement on Twitter. Surveillance information and infographics, in contrast, were generally associated with greater engagement on Twitter. Emergent issues and correctives were particularly well engaged with on Twitter, although correctives were included in a minority of tweets (1.4%). Given that social media is part of a misinformation crisis [48], or infodemic [49,50], it is important to consider how public health agencies address misinformation in these environments.

Recommendation 3

For public health agencies using both platforms, we recommend careful use of images in their messages, including photographs, illustrations, and videos, as these media types were all associated with less engagement across both platforms. Including text-in-image, however, is a reasonable recommendation, since it was associated with greater engagement across platforms.

In general, our results show that not all types of images are engaged with similarly. On both platforms, photographs were significantly associated with fewer shares, whereas infographics were generally associated with more shares and likes. Although illustrations were associated with fewer likes and shares on both platforms, this negative association was only significant for Facebook likes. Infographics about the pandemic were associated with higher engagement on both platforms but were also already highly prevalent; their current level of use in this context is therefore likely sufficient. Lastly, text-in-image was generally associated with more likes and shares on Twitter and more sharing on Facebook, highlighting the importance of textual and semantic content alongside visual content.

Limitations and Future Work

This study intended to show how public health agencies construct their messages across Facebook and Twitter and how users respond to these messages, similarly or differently, across platforms. To control for aspects of message topic, we focused only on COVID-19–related messages. COVID-19 is also a major health and risk issue, and one that public health agencies across the country could be expected to communicate about in 2020. However, the focus on COVID-19 limits the extent to which we can generalize the findings to health and risk communication more broadly. Moreover, the statistical tests used could be improved with a regression model that assesses and controls for other variables affecting audience engagement. Nevertheless, our random sampling technique, spanning multiple kinds of agencies and an entire year, helps us generalize and have confidence in the results.

Health communicators should consider that social media algorithms themselves are problematic as they lead to echo chamber effects [35] and are biased toward hyperactive users [51]. Audience engagement on social media itself should thus be considered with care. The literature generally points to social media engagement as being driven by high emotional content [52], out-group animosity [53], and fear-arousing sensationalism [54]. Simply acquiring more engagement is thus not always appropriate for health and risk communicators. Moreover, there is a chance that social media in government may be used for political purposes [55,56]. Future studies may thus advance this work by examining the quality of engagement across platforms, political issues in public health communication, and examining the nature of the comments to public health messages.

Few posts featured personalities on Facebook (17/851, 1.9%) or Twitter (5/826, 0.6%), so we could not properly assess the impact of this message element on engagement. However, celebrities and personal stories can positively influence health behavior and may be further studied in this context [54,57]. In addition, analyses of fear appeals, distinctions between more and less informative (or scientific) messages, or the use of storytelling could have improved this study. Some message features need better definition to increase reliability, including representatives and requests. The category of representatives and its results here should be considered with caution, since it is the broadest category of the framework and had a low κ. In all, future research may gain from refining the framework categories, further examining the use of celebrities or personal stories, and examining the relationship between fear appeals or other rhetorical strategies and different levels and qualities of user engagement.

Conclusion

In general, we find a data and policy orientation for Twitter messages and users and a local and personal orientation for Facebook, although with many similarities across the two platforms. Message elements that impact engagement are similar across both platforms, with some notable distinctions. This study provides novel evidence for differences in COVID-19 public health messaging on social media, advancing health communication research and recommendations for health and risk communication strategies.

Conflicts of Interest

None declared.

Multimedia Appendix 1

Sampled accounts and κ values.

DOCX File , 38 KB

Multimedia Appendix 2

Detailed framework description and coding rules.

DOCX File , 562 KB

Multimedia Appendix 3

Sample statistics and analyses.

DOCX File , 21 KB

  1. Centers for Disease Control and Prevention (CDC). Crisis and Emergency Risk Communication (CERC). 2018.   URL: https://emergency.cdc.gov/cerc/ [accessed 2022-11-09]
  2. Thackeray R, Neiger BL, Burton SH, Thackeray CR. Analysis of the purpose of state health departments' tweets: information sharing, engagement, and action. J Med Internet Res 2013 Nov 11;15(11):e255 [FREE Full text] [CrossRef] [Medline]
  3. Hou Q, Zhao Y, Su X, Rong R, Situ S, Cui Y. Using Sina-Weibo microblogs to inform the development and dissemination of health awareness material about Zika virus transmission, China, 2016-17. PLoS One 2022 Jan 27;17(1):e0261602 [FREE Full text] [CrossRef] [Medline]
  4. Shearer E, Matsa K. News Use Across Social Media Platforms 2018. 2018.   URL: https://www.pewresearch.org/journalism/2018/09/10/news-use-across-social-media-platforms-2018/ [accessed 2022-11-10]
  5. Hughes A, Wojcik S. 10 Facts about Americans and Twitter. 2022.   URL: https://www.pewresearch.org/fact-tank/2019/08/02/10-facts-about-americans-and-twitter/ [accessed 2022-11-10]
  6. Alonso-Cañadas J, Galán-Valdivieso F, Saraite-Sariene L, Caba-Pérez C. Committed to health: key factors to improve users' online engagement through Facebook. Int J Environ Res Public Health 2020 Mar 11;17(6):1814 [FREE Full text] [CrossRef] [Medline]
  7. Sutton J, Renshaw SL, Butts CT. COVID-19: Retransmission of official communications in an emerging pandemic. PLoS One 2020;15(9):e0238491 [FREE Full text] [CrossRef] [Medline]
  8. Tasnim S, Hossain MM, Mazumder H. Impact of rumors and misinformation on COVID-19 in social media. J Prev Med Public Health 2020 May;53(3):171-174 [FREE Full text] [CrossRef] [Medline]
  9. Ahmed W, Vidal-Alaball J, Downing J, López Seguí F. COVID-19 and the 5G conspiracy theory: social network analysis of Twitter data. J Med Internet Res 2020 May 06;22(5):e19458 [FREE Full text] [CrossRef] [Medline]
  10. León B, Martínez-Costa MP, Salaverría R, López-Goñi I. Health and science-related disinformation on COVID-19: a content analysis of hoaxes identified by fact-checkers in Spain. PLoS One 2022;17(4):e0265995 [FREE Full text] [CrossRef] [Medline]
  11. Jenkins MC, Moreno MA. Vaccination discussion among parents on social media: a content analysis of comments on parenting blogs. J Health Commun 2020 Mar 03;25(3):232-242. [CrossRef] [Medline]
  12. Hernandez RG, Hagen L, Walker K, O'Leary H, Lengacher C. The COVID-19 vaccine social media infodemic: healthcare providers' missed dose in addressing misinformation and vaccine hesitancy. Hum Vaccin Immunother 2021 Sep 02;17(9):2962-2964 [FREE Full text] [CrossRef] [Medline]
  13. Reuter K, Wilson ML, Moran M, Le N, Angyan P, Majmundar A, et al. General audience engagement with antismoking public health messages across multiple social media sites: comparative analysis. JMIR Public Health Surveill 2021 Feb 19;7(2):e24429. [CrossRef]
  14. Kothari A, Foisey L, Donelle L, Bauer M. How do Canadian public health agencies respond to the COVID-19 emergency using social media: a protocol for a case study using content and sentiment analysis. BMJ Open 2021 Apr 22;11(4):e041818 [FREE Full text] [CrossRef] [Medline]
  15. Slavik CE, Buttle C, Sturrock SL, Darlington JC, Yiannakoulias N. Examining tweet content and engagement of Canadian public health agencies and decision makers during COVID-19: mixed methods analysis. J Med Internet Res 2021 Mar 11;23(3):e24883 [FREE Full text] [CrossRef] [Medline]
  16. Sell TK, Hosangadi D, Trotochaud M. Misinformation and the US Ebola communication crisis: analyzing the veracity and content of social media messages related to a fear-inducing infectious disease outbreak. BMC Public Health 2020 May 07;20(1):550 [FREE Full text] [CrossRef] [Medline]
  17. Yang K, Pierri F, Hui P, Axelrod D, Torres-Lugo C, Bryden J, et al. The COVID-19 infodemic: Twitter versus Facebook. Big Data Soc 2021 May 05;8(1):205395172110138. [CrossRef]
  18. Oz M, Zheng P, Chen G. Twitter versus Facebook: comparing incivility, impoliteness, and deliberative attributes. In: New Media & Society. Thousand Oaks, CA: Sage; Dec 31, 2017:3400-3419.
  19. Halpern D, Valenzuela S, Katz J. We face, I tweet: how different social media influence political participation through collective and internal efficacy. J Comput-Mediat Commun 2017;22(6):336. [CrossRef]
  20. Chu K, Sidhu AK, Valente TW. Electronic cigarette marketing online: a multi-site, multi-product comparison. JMIR Public Health Surveill 2015;1(2):e11 [FREE Full text] [CrossRef] [Medline]
  21. Eriksson M, Olsson E. Facebook and Twitter in crisis communication: a comparative study of crisis communication professionals and citizens. J Contingencies Crisis Manag 2016 Jun 13;24(4):198-208. [CrossRef]
  22. Shane-Simpson C, Manago A, Gaggi N, Gillespie-Lynch K. Why do college students prefer Facebook, Twitter, or Instagram? Site affordances, tensions between privacy and self-expression, and implications for social capital. Comput Hum Behav 2018 Sep;86:276-288. [CrossRef]
  23. Nikolinakou A, King KW. Viral video ads: emotional triggers and social media virality. Psychol Mark 2018 Jul 24;35(10):715-726. [CrossRef]
  24. Spiliotopoulos T, Oakley I. An exploration of motives and behavior across Facebook and Twitter. J Syst Inf Technol 2020 Jul 13;22(2):201-222. [CrossRef]
  25. Moreno MA, D'Angelo J. Social media intervention design: applying an affordances framework. J Med Internet Res 2019 Mar 26;21(3):e11014 [FREE Full text] [CrossRef] [Medline]
  26. Liu B, Kim S. How organizations framed the 2009 H1N1 pandemic via social and traditional media: implications for U.S. health communicators. Public Relat Rev 2011 Sep;37(3):233-244. [CrossRef]
  27. Lwin M, Lu J, Sheldenkar A, Schulz P. Strategic uses of Facebook in zika outbreak communication: implications for the crisis and emergency risk communication model. Int J Environ Res Public Health 2018 Sep 10;15(9):1974 [FREE Full text] [CrossRef] [Medline]
  28. Wang Y, Hao H, Platt LS. Examining risk and crisis communications of government agencies and stakeholders during early-stages of COVID-19 on Twitter. Comput Human Behav 2021 Jan;114:106568 [FREE Full text] [CrossRef] [Medline]
  29. Vos SC, Sutton J, Yu Y, Renshaw SL, Olson MK, Gibson CB, et al. Retweeting risk communication: the role of threat and efficacy. Risk Anal 2018 Dec 06;38(12):2580-2598. [CrossRef] [Medline]
  30. Wahbeh A, Nasralah T, Al-Ramahi M, El-Gayar O. Mining physicians' opinions on social media to obtain insights into COVID-19: mixed methods analysis. JMIR Public Health Surveill 2020 Jun 18;6(2):e19276 [FREE Full text] [CrossRef] [Medline]
  31. DePaula N, Hagen L, Roytman S, Dyson D, Alnahass D, Patel M, et al. A framework of social media messages for crisis and risk communication: a study of the covid-19 pandemic. 2022 Presented at: 55th Hawaii International Conference on System Sciences (HICSS); 2022; Virtual. [CrossRef]
  32. Leppert K, Saliterer I, Korać S. The role of emotions for citizen engagement via social media – a study of police departments using Twitter. Gov Inf Q 2022 Jul;39(3):101686. [CrossRef]
  33. DePaula N, Dincelli E. Information strategies and affective reactions: how citizens interact with government social media content. FM 2018 Apr 01;23(4). [CrossRef]
  34. Pew Research Center. Social Media Fact Sheet. 2021.   URL: https://www.pewresearch.org/internet/fact-sheet/social-media/ [accessed 2022-11-10]
  35. Cinelli M, De Francisci Morales G, Galeazzi A, Quattrociocchi W, Starnini M. The echo chamber effect on social media. Proc Natl Acad Sci U S A 2021 Mar 02;118(9):e2023301118 [FREE Full text] [CrossRef] [Medline]
  36. Kwak H, Lee C, Park H, Moon S. What is Twitter, a social network or a news media? 2010 Presented at: 19th International Conference on World Wide Web (WWW 2010); April 26-30, 2010; Raleigh, NC. [CrossRef]
  37. Shapiro M, Hemphill L. Politicians and the policy agenda: does use of twitter by the U.S. Congress direct New York Times content? Policy Internet 2017;9(1):109-132. [CrossRef]
  38. Lin H, Qiu L. Two sites, two voices: linguistic differences between Facebook status updates and tweets. In: Rau PLP, editor. Cross-Cultural Design. Cultural Differences in Everyday Life. CCD 2013. Lecture Notes in Computer Science, vol 8024. Berlin, Heidelberg: Springer; 2013:432-440.
  39. Voorveld HAM, van Noort G, Muntinga DG, Bronner F. Engagement with social media and social media advertising: the differentiating role of platform type. J Advert 2018 Feb 13;47(1):38-54. [CrossRef]
  40. Association for Professionals in Infection Control and Epidemiology. US Government Health Agencies: With Functions Related to Infection Prevention and Control. 2008.   URL: https://apic.org/Resource_/TinyMceFileManager/Advocacy-PDFs/outline_of_govt_health_agencies.pdf [accessed 2022-11-10]
  41. Miles C. About Us: Learn More about CrowdTangle.   URL: http://help.crowdtangle.com/en/articles/4201940-about-us [accessed 2022-11-10]
  42. Fairclough N. Analysing Discourse: Textual Analysis for Social Research. London, UK: Routledge; 2003.
  43. van Dijk TA. Discourse and Knowledge: A Sociocognitive Approach. Cambridge, UK: Cambridge University Press; 2014.
  44. Vos SC, Cohen E. Using pictures in health and risk messages. Oxf Res Encycl Commun 2017 Aug 22. [CrossRef]
  45. Vos SC, Sutton J, Gibson CB, Butts CT. #Ebola: emergency risk messages on social media. Health Secur 2020 Dec 01;18(6):461-472. [CrossRef] [Medline]
  46. Howell DC. Statistical Methods for Psychology. Boston, MA: Wadsworth Cengage Learning; 1994.
  47. Renshaw S, Mai S, Dubois E, Sutton J, Butts C. Cutting through the noise: predictors of successful online message retransmission in the first 8 months of the COVID-19 pandemic. Health Secur 2021;19(1):31-43 [FREE Full text] [CrossRef] [Medline]
  48. DePaula N, Fietkiewicz KJ, Kaja J, Froehlich TJ, Million AJ, Dorsch I, et al. Challenges for Social Media: Misinformation, Free Speech, Civic Engagement, and Data Regulations. Proceedings of the Association for Information Science and Technology 2018;55(1):665-668. [CrossRef]
  49. Fagerland MW. t-tests, non-parametric tests, and large studies--a paradox of statistical practice? BMC Med Res Methodol 2012 Jun 14;12(1):78 [FREE Full text] [CrossRef] [Medline]
  50. Park S, Han S, Kim J, Molaie MM, Vu HD, Singh K, et al. COVID-19 discourse on Twitter in four Asian countries: case study of risk communication. J Med Internet Res 2021 Mar 16;23(3):e23272. [CrossRef]
  51. Papakyriakopoulos O, Serrano JCM, Hegelich S. Political communication on social media: a tale of hyperactive users and bias in recommender systems. Online Soc Netw Media 2020 Jan;15:100058. [CrossRef]
  52. Keib K, Espina C, Lee Y, Wojdynski BW, Choi D, Bang H. Picture this: the influence of emotionally valenced images, on attention, selection, and sharing of social media news. Media Psychol 2017 Oct 05;21(2):202-221. [CrossRef]
  53. Rathje S, Van Bavel JJ, van der Linden S. Out-group animosity drives engagement on social media. Proc Natl Acad Sci U S A 2021 Jun 29;118(26):e2024292118 [FREE Full text] [CrossRef] [Medline]
  54. Ali K, Zain-ul-abdin K, Li C, Johns L, Ali AA, Carcioppolo N. Viruses going viral: impact of fear-arousing sensationalist social media messages on user engagement. Sci Commun 2019 May 03;41(3):314-338. [CrossRef]
  55. DePaula N. Supporting the Cause: An Analysis of How Government Agencies Use Twitter Hashtags. Proceedings of the Association for Information Science and Technology 2018;55(5):788-789 [FREE Full text] [CrossRef]
  56. DePaula N. Political ideology and information technology in government online communication. Gov Inf Q 2022 Aug 20:e101747. [CrossRef]
  57. Sillence E, Martin R. Talking about decisions: the facilitating effect of a celebrity health announcement on the communication of online personal experiences around decision-making. Health Commun 2020 Nov;35(12):1447-1454. [CrossRef] [Medline]


API: application programming interface
CTR: click-through rate
IRR: interrater reliability
RQ: research question
WMW: Wilcoxon-Mann-Whitney


Edited by T Purnat; submitted 09.06.22; peer-reviewed by C Baur, JB Del Rosario; comments to author 28.07.22; revised version received 27.08.22; accepted 08.09.22; published 20.12.22

Copyright

©Nic DePaula, Loni Hagen, Stiven Roytman, Dana Alnahass. Originally published in JMIR Infodemiology (https://infodemiology.jmir.org), 20.12.2022.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Infodemiology, is properly cited. The complete bibliographic information, a link to the original publication on https://infodemiology.jmir.org/, as well as this copyright and license information must be included.