Published in Vol 5 (2025)

This is a member publication of City University of Hong Kong

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/57951.
The Role of Influencers and Echo Chambers in the Diffusion of Vaccine Misinformation: Opinion Mining in a Taiwanese Online Community


1Li Ka Shing Faculty of Medicine, School of Public Health, University of Hong Kong, Hong Kong, China

2College of Public Health, Institute of Health Behaviors and Community Sciences, National Taiwan University, Taipei, Taiwan

3Department of Media and Communication, City University of Hong Kong, Room 5086, Creative Media Center, Hong Kong, China

Corresponding Author:

Fen Lin, PhD


Background: The prevalence and spread of misinformation are a concern because they exacerbate vaccine hesitancy and, in turn, reduce vaccine intent. However, few studies have focused on how vaccine misinformation diffuses online, who is responsible for the diffusion, and the mechanisms by which it happens. In addition, researchers have rarely investigated this in non-Western contexts that are particularly vulnerable to misinformation.

Objective: This study aims to identify COVID-19 vaccine misinformation, map its diffusion, and identify the effect of echo chamber users on misinformation diffusion on a Taiwanese online forum.

Methods: The study uses data from a popular forum in Taiwan called PTT. A crawler scraped all threads on the most popular subforum from January 2021 until December 2022. Vaccine-related threads were identified through keyword searching (n=5818). Types of misinformation, including misleading, disinformation, conspiracy, propaganda, and fabricated content, were coded by 2 researchers. Polarity was proposed as a proxy for measuring an individual’s level of involvement in the echo chamber, one of the mechanisms responsible for the viral misinformation on social media. Factors related to information diffusion, including misinformation type and polarity, were then assessed with negative binomial regression.

Results: Of 5818 threads, 3830 (65.8%) were identified as true information and 1601 (27.5%) contained misinformation, yielding 5431 threads for analysis. Misinformation content did not vary much from other contexts. Propaganda-related information was the most likely to be reposted when compared with true information (relative risk: 2.07; P<.001). However, the more polarized a user was, the less likely his or her content was to be reposted (relative risk: 0.22; P<.001). By removing the nodes with high indegree, outdegree, and betweenness centrality, we found that both the core network and the entire network demonstrated a decreasing trend in average polarity score, showing that influential users contributed to the polarization of misinformation consumption.

Conclusions: Although the forum exhibits resilience to echo chambering, active users and brokers contribute significantly to the polarization of the community, particularly through propaganda-style misinformation. The popularity of propaganda-style misinformation may be linked to the political nature of the forum, where public opinion follows "elite cues" on issues, as observed in the United States. This study corroborates that finding and contributes a data point from a non-Western context. To manage the echo chambering of misinformation, more effort can be put into moderating these users, limiting polarization and the spread of misinformation and thereby curbing growing vaccine hesitancy.

JMIR Infodemiology 2025;5:e57951

doi:10.2196/57951


Background

As individuals increasingly turn to social media as a primary source of information, the prevalence and spread of unverified or misinformed scientific claims is concerning [1]. Social media users gravitate toward information that validates their belief systems, forming echo chambers that validate their shared narrative [2]. These echo chambers can be the launching pads for misinformation that goes viral [3]. Worse, they can influence opinions on issues of public concern, such as vaccination [4].

As early as 2001, studies identified the rise of vaccine hesitancy online [5,6] and cataloged the techniques of antivaccination misinformation transmission [7,8]. These studies paralleled the early days of the World Wide Web and focused solely on analyzing web pages. An early study by Zimmerman et al [6] found that misrepresentation (the twisting of content) was a method for conveying vaccine misinformation. Kata [8] elaborated on misrepresentation in her 2010 study on childhood vaccination, in which misinformation, a form of misrepresentation, emerged as a main antivaccination theme in her classification. Under the misinformation umbrella, she found that using outdated sources, misrepresenting facts, self-referencing "experts," presenting statistics without references, and making unsupported statements were all ways of spreading misinformation. The use of negative tones is also a method of strengthening antivaccination messages [9].

As misinformation became more prevalent on social media, amplified by the popularization of the term "fake news" and a global pandemic, more research was conducted to clarify the definition of misinformation and classify its different kinds. The term "misinformation" is often used interchangeably with related concepts such as spam, rumors, fake news, and disinformation. In this study, we use "misinformation" as an umbrella term encompassing all false or inaccurate information disseminated through social media. Under this umbrella, there are different types of misinformation. Some information is fabricated content, while some is manipulated into clickbait. Some is satire or parody passed off as true, and some is propaganda [10]. Sponsored content can also be passed off as true while being partially or totally false. New modes of producing synthetic media through artificial intelligence, such as "deepfakes," distort reality by passing as true. Although there is nuance among the different types of misinformation, all are distinct from true information, which is not deliberately fabricated with malicious intent and does not contain false (scientific) information.

There are several studies on classifying types of vaccine misinformation. Wu et al [11], in their study on general misinformation in social media, tentatively organized misinformation into nonexclusive categories such as unintentionally and intentionally spread misinformation, urban legend, fake news, unverified information, rumor, crowdturfing, spam, and trolling. Zhao et al’s [12] study does not deviate far from this. They identified that misinformation could include conspiracies; concerns about vaccine safety and efficacy; a flat rejection of all vaccines; morality (including religion and human experimenting); and a violation of civil liberties. Classification studies also identified new modes of transmission. Basch et al [13] studied the types of misinformation on TikTok, a medium mixing audiovisual and textual cues, and found that parodying or the overemphasis of false consequences of available vaccines were all methods of strengthening an antivaccination narrative.

In addition to modes of transmission and classification, some studies, although sparse, have linked misinformation to vaccination intent. One questionnaire-based study identified that COVID-19 vaccine hesitancy mediated the relationship between vaccine knowledge and vaccination intention, with misinformation on vaccines being associated with higher vaccine hesitancy; the same study also found that most respondents had been exposed to COVID-19 misinformation [14]. Another study, a randomized controlled trial, found that exposure to recent misinformation induced a decline in vaccination intent, especially when the misinformation misrepresented facts of a scientific nature [15].

In the studies on vaccine misinformation, there are unexplored avenues. Few studies have examined how vaccine misinformation diffuses online, both in time and in quantity. Diffusion is a communication process whereby information is communicated over time "among the members of a social system" [16]. Often, this process is measured in terms of depth, breadth, and speed [17,18]. Diffusion depth refers to the length of the longest transmission chain, breadth refers to the capacity to generate offspring posts, and speed refers to the efficiency of the information diffusion process [19]. Understanding how the diffusion of true information differs from that of misinformation is important because, if approaches such as psychological inoculation [20] or other community vaccination strategies [21,22] are to work, knowing the scale of the spread and when to intervene can diminish it.

In addition, no studies have examined key users' roles in spreading vaccine misinformation. This can be broken down into 2 symbiotic perspectives. The first is how individuals' engagements online may be polarizing or divisive. Previous research on polarity has focused mostly on politics, using different measurement metrics. One study assessed political polarization using interactional, positional, and affective polarity across various platforms, finding that polarization is platform dependent [23]. Another study examined affective polarization against the feminist cause following protests [24]. In the vaccination literature, polarity mostly concerns differences in vaccine sentiment [4,25,26], aligning it with affective polarization. However, there is a paucity of studies analyzing polarity in engagement with misinformation. This is important because the deliberate exclusion of true perspectives on vaccination may affect the downstream intention to vaccinate. Echo chambers that form around misinformation are thus one avenue worth exploring. Previous research has found that echo chambers are closely related to misinformation diffusion [27] and that misinformation disseminated by echo chambers spreads more virally than misinformation not distributed by them [28].

The second perspective is identifying who the key users are in this process. The polarity in the endorsement of misinformation can be far more problematic when the users doing the exclusion are both laypeople [29] and central in the network. Two studies found that information diffused by "brokers," those with high betweenness centrality, affects the final size of information cascades [30,31]. For health information, the diffusion size of posts by the US Centers for Disease Control and Prevention (CDC) was related to broker involvement [32]. More work can thus be done on understanding how key users aid in spreading vaccine misinformation through the deliberate exclusion of "true news," thereby enhancing its dissemination.

The consequence of having these key users spread information lies in the fact that individual beliefs are shaped by their immediate social network, both online and offline [33]. These worlds feed back to each other, entrenching belief. Scholars have found that dissemination of misinformation is driven by social reinforcement in an individual’s digital circles [34]. If a person’s network consists of mostly rumormongers, that person in turn likely propagates the same misinformation. This propagation often spills into the offline space, influencing judgment through reinforcing the legitimacy of the information online [35], eventually creating fissures along sociodemographic lines, further polarizing the offline world [36]. Should this process occur for vaccine information, it would exacerbate the issue of vaccine hesitancy and have negative implications for vaccination. Few studies have examined the upstream part of this process, including in non-Western contexts [36]. This study focuses on vaccine misinformation on a local forum in Taiwan (PTT) from 3 aspects: identifying the types of misinformation, describing how it diffuses relative to regular news, and assessing how influential users fall into chambers of misinformation, affecting its spread.

Research Questions

This study uses the same PTT data to address the following questions about misinformation. First, we determine what types of misinformation are present in a Taiwanese online community; this identification and categorization documents how misinformation topics differ from those in other contexts, given Taiwan's sociopolitical situation. Second, we determine how misinformation cascades differ from those of true information by dichotomizing the topics and comparing their breadth [19]. Finally, we assess to what extent the echo chamber phenomenon exists in the diffusion of misinformation and how influential users, as a proxy for key spreaders of misinformation, affect the echo chamber in the network. The findings of this study can inform how Taiwan can combat growing misinformation in its diverse online ecology.

Methods

Data Procurement

PTT is a terminal-based bulletin board system developed in Taiwan in 1995. Functioning like an internet forum, PTT is often termed Taiwan’s “Reddit” and is one of the most active forums in Taiwan. From July 2022 to July 2023, the average number of users per day was 56,000 [37]. The web-based version of PTT is structured into different boards, which are equivalent to thematic groups, on a forum. Within each board, threads or topics can be started. On each thread, the poster’s metadata such as the username, time of post, and IP address are available for scraping. In addition, within the threads there is a comment section for which the corresponding metadata is also available.

The data for this study were collected from the "Gossiping" board on PTT using a web scraper developed in Python to extract HTML data. The scraping targeted posts made between January 2021 and December 2022. We filtered the dataset using the Chinese term for "vaccine" to ensure we captured vaccine-related threads. The language of all collected data was Mandarin Chinese, the primary language used on PTT. The "Gossiping" board was chosen for its high activity levels and its reputation for a wide range of discussions, including those related to public health, politics, and societal issues. This board is particularly active during times of political or social crisis, making it a rich source for analyzing vaccine misinformation during the COVID-19 pandemic. Other thematic boards were excluded to maintain focus on a general-purpose discussion forum where misinformation may spread more organically across a wider audience. In total, 5818 threads were retrieved.
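As an illustration of this collection step, the sketch below pulls one index page of the board and filters thread titles by the keyword. This is a minimal sketch, not the authors' published scraper: the HTML selectors, the "over 18" cookie, and the paging logic are assumptions based on the public web version of PTT and may change.

```python
import requests
from bs4 import BeautifulSoup

BASE = "https://www.ptt.cc"

def scrape_index_page(url):
    """Fetch one index page of the Gossiping board; return its threads and the previous-page URL."""
    # the Gossiping board sits behind an "over 18" confirmation, handled here via a cookie
    resp = requests.get(url, cookies={"over18": "1"}, timeout=10)
    soup = BeautifulSoup(resp.text, "html.parser")
    threads = []
    for a in soup.select("div.r-ent div.title a"):  # deleted posts have no <a> and are skipped
        threads.append({"title": a.text.strip(), "link": BASE + a["href"]})
    prev_link = soup.select("div.btn-group-paging a")[1]["href"]  # "previous page" button
    return threads, BASE + prev_link

threads, prev_url = scrape_index_page(BASE + "/bbs/Gossiping/index.html")
vaccine_threads = [t for t in threads if "疫苗" in t["title"]]  # keyword filter for "vaccine"
print(len(vaccine_threads), prev_url)
```

Walking `prev_url` repeatedly and stopping once post dates fall outside January 2021 to December 2022 would reproduce the 2-year window described above.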

Identifying Misinformation

The coding process for classifying vaccine misinformation followed an established framework based on existing literature. We used a broad classification scheme (Table 1) derived from studies on misinformation in digital spaces, which categorize misinformation into types such as conspiracies, fabricated content, fake news, and political propaganda. The classification of posts into misinformation categories was conducted in Mandarin by 2 trained coders, who worked independently to classify an initial sample of 500 posts. The initial codebook was developed using an inductive coding approach, where new categories emerged from the data during this pilot phase. The coders reached a high level of intercoder reliability, with a Cohen κ of 0.91, indicating strong agreement. Discrepancies were resolved through discussion, and the finalized codebook was used to classify the remaining 5318 posts.
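For reference, intercoder reliability of this kind can be computed directly from the 2 coders' label vectors; a minimal sketch with scikit-learn, using illustrative labels rather than the study's actual codes:

```python
from sklearn.metrics import cohen_kappa_score

# labels assigned independently by the 2 coders to the same pilot posts (illustrative values)
coder_a = ["propaganda", "factual_news", "conspiracy", "misleading", "factual_news"]
coder_b = ["propaganda", "factual_news", "conspiracy", "disinformation", "factual_news"]

kappa = cohen_kappa_score(coder_a, coder_b)
print(f"Cohen kappa: {kappa:.2f}")  # the paper reports kappa=0.91 on the 500-post pilot
```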

Table 1. Labeling scheme for true information and false information.

Category and label | Classification criteria

True information
 Factual news | Reporting of news events or facts without interpretation or analysis
 Scientifically accurate analytical content | Interpreting or analyzing facts or data to provide deeper understanding

Misinformation
 Misleading information | False or inaccurate information, regardless of intention
 Disinformation | Deliberately created and shared with the intent to mislead or deceive
 Propaganda | Biased or misleading information used to promote a political cause or push a certain point of view
 Fabricated content | Entirely false content designed to deceive and mislead (eg, a fake news source, fake quotes, or nonexistent events)
 Conspiracy | Belief or explanation that something is the result of a secret plot by a group or organization
 Mixed misinformation | Two or more misinformation categories

Religious beliefs | Any discussion of religion in relation to vaccination
Unrelated | Threads containing the "vaccine" keyword but unrelated to vaccine information or misinformation

Any disagreements among coders were resolved through discussion among themselves. The coders informed the principal investigator of these discrepancies, who approved or reassessed their decisions and informed the coders. As a standard reference, we consolidated a library of misinformation on COVID-19 vaccines in Taiwan from 3 fact-checking organizations: MyGoPen [38], Cofacts [39], and the Taiwan FactCheck Center [40]. These organizations, led by civil society movements combatting misinformation, consolidate potentially misinformative news on a variety of topics, including those related to COVID-19. Threads related to the COVID-19 vaccine were filtered using "vaccine" as a keyword in Chinese. In the event of disagreements on classification, the vaccine-related misinformation on these sites was used as a final check. Following the initial calibration, the remaining 5318 threads were split into 2 datasets (n=2659 each) and independently coded. Any questions or concerns were brought to all coders and the principal investigator; the final classification was determined by majority vote or mutual agreement. For comparing the diffusion of misinformation and users' polarized engagement with it, the categories in Table 1 were further collapsed into "true" and "false" information. Posts unrelated to vaccines or involving purely religious content were excluded from the final analysis. Thematic examples of excluded posts include those focused on general prayer requests, religious teachings not tied to vaccination, or unrelated social and political discussions. For example, posts discussing the afterlife or religious rituals were categorized as unrelated, as they did not directly engage with the vaccine misinformation discourse.

For subsequent analyses, we wanted to remove users who only occasionally shared misinformation and focus on users who shared misinformation frequently; that is, we distilled a "core network" by extracting users with at least 5 posts (n≥5, where n is the user's number of posts). This is consistent with previous studies that used multiple filtering techniques to remove randomness in a large network [41,42]. To justify the choice of cutoff, robustness tests were conducted, as shown in Multimedia Appendix 1. In short, different threshold values (from 1 to 10) were applied, and the average polarity scores of the resulting core networks were calculated to check for significant changes. Although the average polarity increased as the cutoff increased, the trend of decreasing polarity when cutting out key users was the same across all values. To balance between cutting off too many and too few nodes, the choice of n≥5 was made.
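A minimal sketch of the core-network extraction and the cutoff robustness sweep, assuming a table of posts keyed by username and a per-user polarity score as defined in the Measuring Polarity section below (all names and values are illustrative):

```python
import pandas as pd

# toy stand-in for the dataset: one row per post, keyed by the poster's username
posts = pd.DataFrame({"user": ["u1"] * 7 + ["u2"] * 3 + ["u3"] * 12})
# per-user volume polarity scores (illustrative values)
polarity = pd.Series({"u1": 6.0, "u2": 1.0, "u3": 14.0})

def core_network_users(posts, cutoff=5):
    """Users with at least `cutoff` posts; the paper's core network uses n >= 5."""
    counts = posts.groupby("user").size()
    return counts[counts >= cutoff].index

# robustness check: how the core network's average polarity moves with the cutoff
for cutoff in range(1, 11):
    users = core_network_users(posts, cutoff)
    if len(users):
        print(cutoff, round(polarity.loc[users].mean(), 2))
```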

Diffusion of Misinformation

The diffusion of information is traditionally measured in 3 ways: breadth, depth, and speed, following previously defined measurements of diffusion on social media platforms [18]. However, given the structure of the PTT forum, depth is unascertainable: on PTT, replies to replied posts always refer to the original post, not the replied post, and this metadata structure obscures the length of the diffusion chain. Compare this to platforms like X (formerly Twitter), where the diffusion chain is preserved because a repost of a repost explicitly references the intermediate repost rather than the original post. Because of PTT's structure, breadth is used to capture the size of information diffusion. Breadth is the number of first-degree child nodes that repost a message. If we denote a message as m and its set of first-degree child nodes as N(m), the breadth B(m) is |N(m)|.
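Under this definition, breadth reduces to counting each thread's first-degree reposts; a minimal sketch, assuming an edge list of (repost, original) pairs with illustrative IDs:

```python
from collections import Counter

# each edge records that a thread reposted an original thread (illustrative IDs)
repost_edges = [("r1", "m1"), ("r2", "m1"), ("r3", "m2")]

# B(m) = |N(m)|: the number of first-degree child nodes reposting message m
breadth = Counter(original for _, original in repost_edges)
print(breadth["m1"], breadth["m2"], breadth["m3"])  # 2 1 0 (never-reposted threads have breadth 0)
```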

In addition, a negative binomial regression model will estimate the predictors of breadth. A negative binomial regression is used because overdispersion is expected in the distribution of diffusion breadth. In the model, the misinformation types are entered as categorical variables, with "true information" as the baseline. The polarity score generated in the next section is also included, because users who tend toward either extreme may have more engagement in the network. In addition, 2 control variables are used: the word count of the post and the activeness of the user. Previous research has suggested that longer posts have a higher likelihood of being transmitted [43]. The activeness of a user is operationalized as the number of historical posts. Although activity on a media platform does not necessarily mean higher engagement [44], it is an important confounding factor to include in the model. The regression model is: log(μi) = β0 + β1(Misleading) + β2(Disinformation) + β3(Propaganda) + β4(Fabricated) + β5(Conspiracy) + β6(WordCount) + β7(UserActiveness) + β8(PolarityScore) + ε,

where μi is the expected count of the dependent variable (breadth of misinformation spread) for the ith observation. The log link function relates the mean of the response variable to the linear predictors. Variables were tested for multicollinearity prior to inclusion in the model; for a deeper analysis of the variable correlations, see Multimedia Appendix 2. Given that there is no offset term in the regression, the exponentiated coefficients of the negative binomial regression can be read as relative risks (RRs) indicating the multiplicative change in expected breadth.
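A sketch of this model in statsmodels, with "true information" as the reference level of the category; the column names and toy data are illustrative, and note that statsmodels' GLM treats the negative binomial dispersion parameter as fixed rather than estimating it jointly:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
# toy stand-in for the coded dataset: one row per thread
df = pd.DataFrame({
    "breadth": rng.poisson(1.0, n),                      # repost counts (dependent variable)
    "info_type": rng.choice(["true", "misleading", "disinformation",
                             "propaganda", "fabricated", "conspiracy"], n),
    "word_count": rng.integers(50, 2000, n),
    "activeness": rng.integers(1, 1000, n),              # user's number of historical posts
    "polarity": rng.random(n),                           # min-max normalized volume polarity
})

model = smf.glm(
    "breadth ~ C(info_type, Treatment(reference='true'))"
    " + word_count + activeness + polarity",
    data=df,
    family=sm.families.NegativeBinomial(),  # negative binomial family for overdispersed counts
).fit()
print(np.exp(model.params))  # exponentiated coefficients read as relative risks (RRs)
```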

Measuring Polarity

Polarization in public opinion refers to the extent to which the views of a population (in this case, support for true or false information) are extreme and distinctly divided [45]. In the context of this study, polarity represents the predominant commenting activity on either true or false information (ie, misinformation), suggesting that users are chambered in their engagement with certain types of information. This measure aligns with the "affective polarization" measures previously used in the political [23,24] and vaccine sentiment literature; however, it is extended in this study to measure endorsement of either true or false information. Further, polarization can be measured at the individual level or aggregated at the community level. In this study, we measure individuals' level of polarity, proposing 2 metrics for analysis.

The first is polarity measured by the difference in proportion of comments on true information and misinformation ("proportion polarity"). To measure this, we collected the commenting behavior of each node v in the network. From the commenting history, we counted the total number of comments, C(v), and the numbers of comments on true and false information, denoted Cpos(v) and Cneg(v), respectively. The proportions of positive and negative comments are then Ppos(v) = Cpos(v)/C(v) and Pneg(v) = Cneg(v)/C(v), respectively. We subtracted the proportion of negative comments from the proportion of positive comments to get the polarity score per node: πprop(v) = Ppos(v) − Pneg(v). The range of πprop(v) is −1 ≤ πprop(v) ≤ 1, with −1 representing commenting only on false information, 0 representing equal commenting on both, and 1 representing commenting only on true information.

The second is polarity measured by the absolute value of the difference in volume of comments on true and false information ("volume polarity"). Measuring by the absolute value of the difference removes the true-false dichotomy and allows a straightforward interpretation of any polarity in the network, permitting a clearer interpretation of echo chambering. To calculate volume polarity, we took the absolute value of the difference between the numbers of negative and positive comments: πvol(v) = |Cneg(v) − Cpos(v)|. The range of πvol(v) is 0 ≤ πvol(v) < ∞, with higher values indicating higher polarity. Figure 1 illustrates the distribution of polarity scores calculated by proportion (Figure 1A) and volume (Figure 1B) for the core network. For ease of interpretation in the negative binomial regression, the volume polarity value is min-max normalized to a value between 0 and 1, given the expected concentration of scores at the low end (a heavy right tail). To preserve the weight of posting in information diffusion, we use polarity by volume in the following analyses.
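Both metrics follow directly from per-user comment counts; a minimal sketch (function and variable names are illustrative):

```python
def proportion_polarity(c_pos, c_neg):
    """pi_prop(v) = P_pos(v) - P_neg(v), ranging over [-1, 1]."""
    total = c_pos + c_neg  # C(v): the user's total comments on true and false threads
    return (c_pos - c_neg) / total if total else 0.0

def volume_polarity(c_pos, c_neg):
    """pi_vol(v) = |C_neg(v) - C_pos(v)|, ranging over [0, inf)."""
    return abs(c_neg - c_pos)

def min_max_normalize(values):
    """Scale volume polarity scores to [0, 1] for use in the regression."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

# a user with 8 comments on true threads and 2 on misinformation threads
print(proportion_polarity(8, 2))  # 0.6: leans toward true information
print(volume_polarity(8, 2))      # 6: absolute imbalance in commenting
```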

Figure 1. Distribution of polarity scores (n=2422) calculated by (A) proportion and (B) volume for the core network.

Node-Level Predictors of Polarity

Influential users generally have a disproportionate impact on the flow of information in a network. Identifying them and their relation to polarity in the network is therefore an important step in diminishing the spread of misinformation. Measures of centrality are typically used to identify node importance in a network. We used 2 centrality metrics to identify influential nodes, each representing a different concept of influence.

The first measure of centrality is degree centrality, which directly measures the number of connections a node has. The more connections a node has, the more central and influential it is within the network. Because the PTT forum data form a directed network, the indegree and outdegree are calculated separately for each node. Decomposing degree into indegree and outdegree helps distinguish whether nodes at either polar end act more as authorities (indegree) or broadcasters (outdegree) of misinformation.

The second measure of centrality is betweenness centrality, which measures the extent to which a node lies on paths between other nodes; specifically, it captures the frequency with which a node appears on the shortest paths between pairs of nodes. Nodes with high betweenness centrality have control over information flow in the network, acting as "bridges" or "brokers" across the network [46]. These nodes are critical for the flow of information because they function as "switches" that facilitate or inhibit information flow. For the network, let σst denote the total number of shortest paths from node s to node t, and σst(v) denote the number of those paths that pass through node v. The betweenness centrality b(v) can be defined as:

b(v) = Σ(s≠v≠t) σst(v) / σst

which sums over all pairs of nodes (s,t) in the network, excluding pairs that include v, and, for each pair, calculates the fraction of shortest paths between s and t that pass through v. Said another way, it calculates the proportion of times v is a bridge along the shortest path between 2 other nodes.

To analyze how polarity in the network changes in response to these influencers in misinformation diffusion, we removed the top percentage of influential users, in increasing increments of 5%, and recalculated and plotted the average absolute volume polarity score of the network. The resulting curves of changing polarity indicate how influential users shift the overall balance of posting on true versus misinformative threads.
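A sketch of this removal experiment with networkx, assuming a directed comment graph and a per-user volume polarity mapping; the toy graph and scores are illustrative:

```python
import networkx as nx

def avg_polarity_after_removal(G, polarity, fractions=(0.05, 0.10, 0.15, 0.20)):
    """Average volume polarity after removing the top fraction of nodes by betweenness."""
    bc = nx.betweenness_centrality(G)
    ranked = sorted(bc, key=bc.get, reverse=True)  # most central brokers first
    out = {}
    for frac in fractions:
        kept = ranked[int(len(ranked) * frac):]    # drop the top `frac` of nodes
        out[frac] = sum(polarity[v] for v in kept) / len(kept)
    return out

# toy directed comment network with stand-in polarity scores
G = nx.gnp_random_graph(200, 0.04, directed=True, seed=1)
polarity = {v: v % 10 for v in G}  # illustrative volume polarity values
print(avg_polarity_after_removal(G, polarity))
# the same sweep is repeated with rankings by indegree (dict(G.in_degree))
# and outdegree (dict(G.out_degree)) in place of betweenness centrality
```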

Ethical Considerations

This study involved the analysis of secondary data collected from publicly available social media platforms. The data were obtained from PTT, an open-access online forum, and all posts were publicly accessible at the time of data collection. No private or sensitive information was collected, and all user data were anonymized before analysis. The study did not involve any interaction with the individuals behind the social media accounts, and no identifiable information is included in the results. All findings are presented in aggregate to ensure anonymity. As the research only analyzed publicly available data and did not involve human subjects directly, formal ethics approval was not required according to institutional guidelines. Nonetheless, the study adhered to ethical principles of data privacy and responsible handling of social media data.

Results

Identified Misinformation

The results in Table 2 show the number of threads for each misinformation type in Table 1, along with elaboration of several of the most common thread types. Of 5818 threads, most (n=2227, 38.3%) involved netizens asking questions for further clarification on vaccines. The next most common was reporting of official news reports or press conference news (n=1603, 27.6%). Among threads containing misinformation, the most common was the "propaganda" type (n=858, 14.7%), indicating the relatively political nature of this forum. After disregarding the unrelated and religious threads and recategorizing the "mixed" threads into the conspiracy category, there were 3830 true threads and 1601 threads containing misinformation, for a total of 5431 threads for the analyses moving forward. For these threads, the numbers of comments and reposts for true and false information are presented in Table 3.

Table 2. Counts and examples of common misinformation types on PTT.

True information (n=3830, 65.8%)

Factual news: 1603 threads (27.6%)
 Common types found: Official reports of vaccine side effects from government sites, often containing a "News" tag, with text fully reported. Most report official announcements or press conference news with no expression of opinion.
 Example: Title: "[News] Free registration now open! Vaccination booth stationed at the PX Mart near Christmas Land"

Scientifically accurate analytical content: 2227 threads (38.3%)
 Common types found: Nonbiased question-asking by "villagers"a that does not carry any slant in content or a tone that instigates comments.
 Example: "Recently, there was a debate on vaccination, in which we only heard the opposing side arguing not to vaccinate. Yes, there are always safety concerns for any medical technology, and it is not 100% preventive of reinfection, but are there other reasons?"

Misinformation (n=1601, 27.5%)

Misleading information: 452 threads (7.8%)
 Common types found: Appears as a question (or comment) but has negative intentions of misleading the public (without understanding the origin of the intention).
 Example: "We are now allowing children to get Moderna vaccines, but the side effects are large. According to Murphy's Law, even if the probability is low, it is still possible. If a child does die after vaccination, who is responsible? … looking at a dazzling object, it's just tin foil. Behind cute makeup is just powder; even polished nails have dark edges."

Disinformation: 97 threads (1.7%)
 Common types found: Linking the vaccine as a direct cause of other diseases or ailments (death, cyborgism, myocarditis, balding, etc) and deriving these conclusions from personal experience.
 Example: "A lot of people are experiencing side effects from the vaccine, and if something happens, no one takes responsibility. Young people do not need vaccination since their ability to recover is incredibly strong, right? QQ"

Propaganda: 858 threads (14.7%)
 Common types found: Linking the reasons for vaccination to specific political attitudes. These often reply to news posts, guided by personal opinions toward specific political positions.
 Example: "The 'Taliban'b really is ahead internationally, even half of their supporters refuse to roll up their sleeves for Medigen [Taiwan's domestic vaccine]. But maybe they can use it for a bath? The party can enjoy a good bath in their benevolent fluid"

Fabricated content: 195 threads (3.4%)
 Common types found: Making nonfactual statements or describing stories related to vaccines based on unwarranted assumptions. Often begins with statements such as "I dreamed that..." or "My friend did..." It is difficult to verify the level of fabrication through text.
 Example: "Recently, a female college student went to her OBGYN after her Moderna injection. She said her menstruation came twice a month. Some people also said after BNT, the heart was uncomfortable, and it took a few days to pass. Why are there still people taking the third dose of mRNA? What's the gossip?"

Conspiracy: 179 threads (3.1%)
 Common types found: Linking vaccine administration or distribution with the hidden interests of the government and pharmaceutical companies, mostly along the lines of achieving the "benefit" of controlling people by controlling vaccine distribution.
 Example: "The original vaccines are developed using the original virus, but the virus is constantly mutating. Vaccine factories are not quick to develop new ones, and just encourage us to take booster doses. Isn't this just being done to cheat us of our money? Just like Intel, slowly squeezing out toothpaste, then launching the next generation of products when they can no longer make money."

Mixed misinformation: 29 threads (0.5%)
 Common types found: Most cases involving 2 or more categories had some analytical component, used in a propaganda way.
 Example: N/Ac

Religious beliefs: 23 threads (0.4%)
 Common types found: Adding religious slogans or arguments after raising vaccine-related views. In some texts, elaborated arguments include the viewpoints of a "false world" or "unknown forces." Could also be broadly categorized as conspiracy, with religious grounding.
 Example: "I believe [username] now. He said that vaccines are all planned to change human beings. Viruses are man-made and everything is planned. He said the heart used to be on the left side of the body, and now it's in the center according to Google. I'm convinced. I do not want the vaccine. Do you believe [username]?"

Unrelated: 155 threads (2.7%)

aVillagers is a common term used on PTT that roughly translates to netizens.

b“Taliban” is a derogatory internet slang term that refers to the Democratic Progressive Party or “green” party in Taiwan. The Mandarin Chinese term substitutes the “li” in “Taliban” for the homophonic sound for “green.”

cNot applicable.

Table 3. Number of comments and reposts by true information and misinformation.

Information type | Average number of comments | Average number of reposts
True information | 74.01 | 0.40
Misinformation | 52.63 | 0.20
 Misleading information | 52.63 | 0.20
 Disinformation | 75.16 | 0.16
 Propaganda | 66.63 | 0.39
 Fabricated content | 50.21 | 0.20
 Conspiracy theories | 51.41 | 0.26

Based on our coding and descriptive analysis of the different types of misinformation, we found that the narratives in Taiwan closely resemble those observed in Western contexts. One example is the conspiracy theory that vaccination is used as a means of control by governments or corporations [47,48]. Broadly speaking, these are symptoms of an overarching lack of trust in health authorities and part of the larger antiscience trend [48-52]. This point is potentially exacerbated in geopolitically tense contexts such as Taiwan. One such example was the prevalent narrative of faulty quality control for the Pfizer-BioNTech vaccine: because Fosun, its distributor for Greater China, is Shanghai-based, some users refused the vaccine as a result. The frequent discussion linking vaccine decisions and political parties (66.63 comments per thread, 0.39 average reposts; Table 3) also corroborates this point, and the same trend has been found in the United States. As a side note, the United States and Taiwan share similar forms of government, as well as bipartisan polarization.

The following analyses use the core network. In the entire network, there were 23,004 unique users, with an average polarity (by volume) of 3.27. The core network is smaller and more polarized than the entire network, with 6035 unique users and an average polarity of 8.70.

Diffusion of Misinformation

Overall, the average word count of posts was 525.5 words, while the median was 326.0, indicating a heavy right tail. The same trend was observed for the activeness of users, with an average of 279.3 engagements (any posts or comments) compared with a median of 129.0. The polarity score of the core network is long-tail distributed, with an average of 38.2 but a median of 4 (Figure 1). For breadth, 608 (15.9%) of the threads identified as containing true information were shared at least once, whereas 165 (10.3%) of misinformative threads were reposted. There was no statistically significant difference between the average repost counts of true information and misinformation, with means of 2.39 and 2.33 reposts, respectively (t=0.250; P=.803). The 95% confidence intervals ranged from −2.2 to 29.1 for true information threads and from −1.1 to 10.3 for misinformation threads.

Table 4 presents the negative binomial regression results predicting the breadth of information diffusion. The RR of reposting for propaganda information is double that of true information (RR=2.07; P<.001), whereas the risk of reposting disinformation is half that of true information (RR=0.48; P=.001). Moreover, posts from more polarized users (RR=0.22; P<.001) do not arouse as much discussion, suggesting the forum is relatively averse to echo chambering in relation to misinformation.

Table 4. Predictors of breadth (n=2422).

Variable | Exp(coef) | SE | P value
Information type
 True information (baseline) | N/Aa
 Misleading information | 1.10 | 0.085 | .28
 Disinformation | 0.48 | 0.227 | .001
 Propaganda | 2.07 | 0.063 | <.001
 Fabricated content | 0.79 | 0.156 | .12
 Conspiracy theories | 1.23 | 0.132 | .12
Polarity of user | 0.22 | 0.202 | <.001
Control variables
 Word count (of post) | 1.0002 | 0.25 × 10⁻⁴ | <.001
 User activeness (number of historical posts) | 0.998 | 0.53 × 10⁻⁴ | .001

aNot applicable.

Node-Level Predictors of Polarity

We also examined whether removing influential users would decrease polarization in the misinformation diffusion network. Figure 2 shows the changes in polarity scores for the entire network and the core network (users with at least 5 posts). The first finding is that, overall, the entire network is less polarized than the core network, as shown by its lower average polarity score. This means most of the active users are polarized; they post predominantly on exclusively true information or misinformation threads. The second finding is that the average polarity score of the network decreased as the most influential users were removed (the top 5% in the entire network and the top 10% in the core network). The 95% confidence intervals are generated from t tests of the differences in means, with no overlap suggesting significance.

Figure 2. Impact of removing top percent of influential nodes on network (polarity score calculation: absolute value of volume).

These results suggest several trends. The first is that the core network exhibits more echo chambering in its behavior. Thus, those who are more influential on PTT are more likely to be polarized, gravitating toward either true or false information. However, interpreted alongside the regression results, in which posts from polarized users aroused less attention, this suggests that polarized users are more active and influential in commenting rather than posting. The second is that polarization is driven by a few key users. In the core network, when the top 10% of nodes by any metric are removed, the polarity score decreases sharply; after 10%, the rate of decrease tapers off. The steeper declines for outdegree and betweenness centrality suggest that identifying active commenters and brokers in the network may be more useful than identifying those who receive many comments.

Discussion

Principal Findings

This study aims to explore vaccine misinformation in Taiwan in 3 ways. The first is by cataloging the types of vaccine misinformation encountered during the pandemic. The second is comparing the diffusion of different types of misinformation. The third is measuring how influential users contribute to the level of polarity of the discussion forum. Overall, the results show that propaganda-style misinformation (RR=2.07; P<.001) has wider diffusion, whereas polarized users (RR=0.22; P<.001) received less attention for their posts. In addition, removing the most influential users in the misinformation diffusion network, measured by indegree, outdegree, and betweenness centrality, decreased the average polarity of the entire network. This suggests that targeting influential users is a potential strategy for combating the echo chamber effect in misinformation.

The results from the regression supplement the literature on misinformation virality [53], which suggests that misinformation type may contribute to differences in diffusion. Misinformation propagated with political propaganda intentions has a higher rate of reposting compared with true information. This could be due, in part, to the nature of the PTT forum, which is heavily politicized. In the broader echo chamber literature on science, there are growing concerns about politically aligned selective exposure to science. A study by Nisbet et al [54] focusing on the United States found that, regardless of political affiliation, there are negative and emotionally loaded reactions to dissonant science communication. The downstream effects of these trends are drastic in that overall trust in the scientific community diminishes. Some studies posit that this is due to a phenomenon of "elite cues," in which public opinion follows the elites' partisan battles on a specific issue, as shown in the climate change debate in the United States [55,56]. An example closer to vaccination is Hamilton and Safford's study [57] on trust in the US CDC among Republicans following Donald Trump's changing views toward the CDC. Although this study does not analyze elite cues, the same trend may also hold for Taiwan, whose political system resembles that of the United States in terms of bipartisanship and polarization. A recent cross-national study further suggests a close relationship between individuals' voting choices and their conspiracy beliefs during the COVID-19 pandemic [58]. A further study of the narratives in the propaganda stream is a next step toward corroborating these trends and enriching the literature with analysis from a different perspective.

Other than propaganda-based misinformation, the finding that disinformation was reposted less also reinforces that certain streams of misinformation may have more viral potential than others. On content, the findings corroborate previous literature showing that longer posts are more likely to be spread [43]. However, a proportionally smaller share of more active users' posts attracts attention, a finding that differs from platforms like X [44]. This could be a natural trend for users whose post volumes are high.

Other findings similar to previous studies include the more prevalent engagement with misinformation compared with true information [59-62], as suggested by the skew of the distribution in Figure 1A, despite most threads containing true information. However, the overall diffusion of true information and misinformation did not differ significantly, as shown by the comparison of repost counts. In addition, this study finds that most misinformation streams also tend to appear in vaccine-hesitant or antivaccination stance threads [63-65], a finding not surprising given the affinity of misinformation to an antivaccination agenda [66]. Another new finding is that "brokers" of forum information, that is, influential users as measured by betweenness, disproportionately engage with true information or misinformation across the entire network, suggesting that they are the polarizing forces in the echo chambering of misinformation. These results highlight the importance of measuring polarity in the endorsement of misinformation for vaccine hesitancy, an aspect previously underexplored in the literature. By analyzing the polarized endorsement of misinformation, we gain insights into the formation of echo chambers of misinformation in a forum and can devise strategies for managing them. The findings for PTT suggest 2 main ways forward for vaccine misinformation management in Taiwan.

The first is the implementation of a consolidated, automatic early detection system for online media potentially containing vaccine misinformation. Much like a disease surveillance system, a misinformation surveillance system should detect and flag potential misinformation early so that its transmission can be inhibited if necessary. This is useful because of the apparent affinity of misinformation for attracting users, which is more dangerous when those users are influential. Once detected, this information can be used to inform the public as a means of "psychological inoculation" to help internet users better discern misinformation [67]. Given that the narratives of misinformation are not novel in Taiwan, such a system could be trained using global reports of vaccine-related misinformation in addition to the civilian-led misinformation clarification platforms already present in Taiwan (MyGoPen, Cofacts, etc). This management system would be particularly important during outbreaks or other flashpoints, as online information often peaks in response to events [68-71]. As a step in infodemic management, it would help improve health literacy.

The second is to create a system that can identify and then neutralize influential brokers (ie, high betweenness centrality) and commenters (ie, high outdegree) who interact heavily with misinformation posts. Given these users' connectivity, neutralizing them may reduce the polarity of the network. Identifying users who are either antivaccination or spreaders of misinformation using these metrics represents untapped potential for positive engagement that continually breaks the echo chamber effect for both vaccine stance and misinformation spread.

Although the technical aspects of such a system are relatively straightforward, the challenge extends beyond just technical regulation and public health, requiring the balancing of ethical considerations in moderating harmful information without impinging on free speech. This moral dilemma has been studied in other contexts, such as politics and culture [72,73], suggesting that the public supports misinformation management if it causes harm, defined as something undermining people’s ability to make informed decisions (particularly around public health and elections) [74]. Studies on the association of misinformation and vaccine intention suggest that they are negatively correlated [15,75]. Other considerations are the scope of required management, such as removal of posts or the temporary or permanent suspension of users. During the COVID-19 pandemic, many social media platforms (eg, Facebook, Instagram) assumed this regulatory role and intervened to prevent the spread of misinformation and conspiracies around vaccines. However, in Taiwan, no such action was taken to actively manage misinformation on local platforms. Rather, the government provided information platforms as secondary references for those already exposed. Moving forward, misinformation management should constitute part of the overall architecture for epidemic management in Taiwan, a process likely to involve fierce democratic discussion or debate on the moral dilemma of speech regulation.

Limitations

A significant limitation of this study is its focus on a single platform, PTT, which may not be representative of the broader online landscape in Taiwan. PTT’s user base is distinct in its demographics and its heavily politicized nature, which could skew the findings on the spread of vaccine misinformation and its interaction with user behavior. As such, the results may not fully capture how misinformation diffuses across other popular platforms like Facebook, LINE, or Instagram, which have different user profiles and communication dynamics. Future research could address this limitation by conducting comparative studies across multiple social media platforms to gain a more comprehensive understanding of misinformation diffusion.

Another limitation stems from the use of social media data, which inherently introduces challenges related to user anonymity and the authenticity of user identities. Social media platforms often allow for pseudonymous accounts, making it difficult to verify whether the key users identified in this study are representative of broader population segments or if they include bots or automated accounts, which are known to play a role in the spread of misinformation. Additionally, social media data are typically unstructured, which presents challenges in accurately capturing the context, sentiment, and intent behind posts. This study relied on manual coding, which, despite high intercoder reliability, is subject to human bias. Incorporating more automated methods, such as natural language processing, could reduce bias and improve the scalability of the analysis. Moreover, the rapidly evolving nature of social media platforms, algorithms, and user behavior means that findings based on past data may not hold in future contexts, making longitudinal studies crucial for a more dynamic understanding of misinformation patterns over time.

Conclusion

This study provides key insights into the dynamics of vaccine misinformation in Taiwan, highlighting the influence of core users in spreading polarizing content and the different diffusion patterns between true information and misinformation. The findings underscore the potential for targeted interventions, such as identifying and neutralizing influential users who propagate misinformation, as a means of reducing network polarization and improving public health outcomes. By addressing the mechanisms of misinformation diffusion and understanding the role of user behavior in its propagation, this research contributes to the broader effort to combat misinformation in digital spaces and enhance public trust in scientific communication. Future studies can build on these findings by extending the analysis to other platforms and exploring cross-national comparisons to generalize the results.

Acknowledgments

This study was supported by the CityUHK Strategic Research Grant (7005826 and 7005823).

Conflicts of Interest

None declared.

Multimedia Appendix 1

Testing different thresholds for the cutoff of the core group.

DOCX File, 729 KB

Multimedia Appendix 2

Correlation analysis for quantitative variables in negative binomial regression.

DOCX File, 57 KB

  1. Bessi A, Coletto M, Davidescu GA, Scala A, Caldarelli G, Quattrociocchi W. Science vs conspiracy: collective narratives in the age of misinformation. PLoS One. 2015;10(2):e0118093. [CrossRef] [Medline]
  2. Colleoni E, Rozza A, Arvidsson A. Echo chamber or public sphere? Predicting political orientation and measuring political homophily in Twitter using big data. J Commun. Apr 2014;64(2):317-332. [CrossRef]
  3. Bessi A, Petroni F, Del Vicario M, et al. Viral misinformation: the role of homophily and polarization. Presented at: WWW ’15: 24th International World Wide Web Conference; May 18-22, 2015:355-356; Florence Italy. [CrossRef]
  4. Schmidt AL, Zollo F, Scala A, Betsch C, Quattrociocchi W. Polarization of the vaccination debate on Facebook. Vaccine (Auckl). Jun 14, 2018;36(25):3606-3612. [CrossRef] [Medline]
  5. Davies P, Chapman S, Leask J. Antivaccination activists on the world wide web. Arch Dis Child. Jul 2002;87(1):22-25. [CrossRef] [Medline]
  6. Zimmerman RK, Wolfe RM, Fox DE, et al. Vaccine criticism on the World Wide Web. J Med Internet Res. Jun 29, 2005;7(2):e17. [CrossRef] [Medline]
  7. Friedlander ER. Opposition to immunization: a pattern of deception. Sci Rev Altern Med. 2001;5(1):18-23.
  8. Kata A. A postmodern Pandora’s box: anti-vaccination misinformation on the Internet. Vaccine (Auckl). Feb 17, 2010;28(7):1709-1716. [CrossRef] [Medline]
  9. Donzelli G, Palomba G, Federigi I, et al. Misinformation on vaccination: a quantitative analysis of YouTube videos. Hum Vaccin Immunother. Jul 3, 2018;14(7):1654-1659. [CrossRef] [Medline]
  10. Tandoc EC, Lim ZW, Ling R. Defining 'fake news'. Digital Journalism. Feb 7, 2018;6(2):137-153. [CrossRef]
  11. Wu L, Morstatter F, Carley KM, et al. Misinformation in social media: definition, manipulation, and detection. ACM SIGKDD Explorations Newsletter. 2019;21(2):80-90. [CrossRef]
  12. Zhao S, Hu S, Zhou X, et al. The prevalence, features, influencing factors, and solutions for COVID-19 vaccine misinformation: systematic review. JMIR Public Health Surveill. Jan 11, 2023;9(1):e40201. [CrossRef] [Medline]
  13. Basch CH, Meleo-Erwin Z, Fera J, Jaime C, Basch CE. A global pandemic in the time of viral memes: COVID-19 vaccine misinformation and disinformation on TikTok. Hum Vaccin Immunother. Aug 3, 2021;17(8):2373-2377. [CrossRef] [Medline]
  14. Lee SK, Sun J, Jang S, Connelly S. Misinformation of COVID-19 vaccines and vaccine hesitancy. Sci Rep. Aug 11, 2022;12(1):13681. [CrossRef] [Medline]
  15. Loomba S, de Figueiredo A, Piatek SJ, de Graaf K, Larson HJ. Measuring the impact of COVID-19 vaccine misinformation on vaccination intent in the UK and USA. Nat Hum Behav. Mar 2021;5(3):337-348. [CrossRef] [Medline]
16. Rogers EM. Diffusion of innovations: modifications of a model for telecommunications. Die Diffusion von Innovationen in der Telekommunikation. 1995:25-38. [CrossRef]
  17. Van den Bulte C. New product diffusion acceleration: measurement and analysis. Marketing Science. Nov 2000;19(4):366-380. [CrossRef]
  18. Yang J, Counts S. Predicting the speed, scale, and range of information diffusion in Twitter. ICWSM. 2010;4(1):355-358. [CrossRef]
  19. Zhang L, Peng TQ. Breadth, depth, and speed: diffusion of advertising messages on microblogging sites. Internet Research. Jun 1, 2015;25(3):453-470. [CrossRef]
  20. van der Linden S, Dixon G, Clarke C, Cook J. Inoculating against COVID-19 vaccine misinformation. EClinicalMedicine. Mar 2021;33:100772. [CrossRef] [Medline]
  21. Whitehead HS, French CE, Caldwell DM, Letley L, Mounier-Jack S. A systematic review of communication interventions for countering vaccine misinformation. Vaccine (Auckl). Jan 27, 2023;41(5):1018-1034. [CrossRef] [Medline]
  22. Shelby A, Ernst K. Story and science. Hum Vaccin Immunother. Aug 8, 2013;9(8):1795-1801. [CrossRef]
  23. Yarchi M, Baden C, Kligler-Vilenchik N. Political polarization on the digital sphere: a cross-platform, over-time analysis of interactional, positional, and affective polarization on social media. Polit Commun. Mar 15, 2021;38(1-2):98-139. [CrossRef]
  24. Suarez Estrada M, Juarez Y, Piña-García CA. Toxic social media: affective polarization after feminist protests. Social Media + Society. Apr 2022;8(2). [CrossRef]
  25. Cossard A, De Francisci Morales G, Kalimeri K, Mejova Y, Paolotti D, Starnini M. Falling into the echo chamber: the Italian vaccination debate on Twitter. ICWSM. 2020;14:130-140. [CrossRef]
  26. Dunn AG, Leask J, Zhou X, Mandl KD, Coiera E. Associations between exposure to and expression of negative opinions about human papillomavirus vaccines on social media: an observational study. J Med Internet Res. Jun 10, 2015;17(6):e144. [CrossRef] [Medline]
  27. Törnberg P. Echo chambers and viral misinformation: modeling fake news as complex contagion. PLoS One. 2018;13(9):e0203958. [CrossRef] [Medline]
  28. Choi D, Chun S, Oh H, Han J, Kwon TT. Rumor propagation is amplified by echo chambers in social media. Sci Rep. Jan 15, 2020;10(1):310. [CrossRef] [Medline]
  29. Kim I, Valente TW. COVID-19 health communication networks on Twitter: identifying sources, disseminators, and brokers. Connect (Tor). Jan 1, 2020;40(1):129-142. [CrossRef]
  30. Bakshy E, Rosenn I, Marlow C, Adamic L. The role of social networks in information diffusion. 2012. Presented at: WWW 2012; Apr 16-20, 2012:519-528; Lyon, France. [CrossRef]
  31. Weng L, Menczer F, Ahn YY. Virality prediction and community structure in social networks. Sci Rep. 2013;3(1):2522. [CrossRef] [Medline]
  32. Meng J, Peng W, Tan PN, Liu W, Cheng Y, Bae A. Diffusion size and structural virality: the effects of message and network features on spreading health information on Twitter. Comput Human Behav. Dec 2018;89:111-120. [CrossRef] [Medline]
  33. DiFonzo N, Suls J, Beckstead JW, et al. Network structure moderates intergroup differentiation of stereotyped rumors. Soc Cogn. Oct 2014;32(5):409-448. [CrossRef]
  34. De Domenico M, Lima A, Mougel P, Musolesi M. The anatomy of a scientific rumor. Sci Rep. Oct 18, 2013;3(1):2980. [CrossRef] [Medline]
  35. DiFonzo N. Conspiracy rumor psychology. In: Conspiracy Theories and the People Who Believe Them. 2018:257-268. [CrossRef]
  36. Fine GA. Rumor, trust and civil society: collective memory and cultures of judgment. Diogenes (Engl ed). Feb 2007;54(1):5-18. [CrossRef]
  37. PTT online user statistics [Website in Chinese]. PTT. URL: https://www.ptt.cc/bbs/Record/search?q=上站人次統計 [Accessed 2023-07-26]
  38. Search results for vaccines. MyGoPen. URL: https://www.mygopen.com/search?q=疫苗 [Accessed 2023-11-01]
  39. Search “vaccine” (Chinese: 真的假的 “疫苗”). Cofacts. URL: https://cofacts.tw/search?type=messages&q=疫苗 [Accessed 2023-11-01]
  40. Taiwan FactCheck Center (台灣事實查核中心). URL: https://tfc-taiwan.org.tw/ [Accessed 2023-11-01]
  41. Zhao X, Wang X, Ma Z, Ma R. Primacy effect of emotions in social stories: user engagement behaviors with breast cancer narratives on Facebook. Comput Human Behav. Dec 2022;137:107405. [CrossRef]
  42. Wang X, Song Y. Viral misinformation and echo chambers: the diffusion of rumors about genetically modified organisms on social media. Internet Research. Jun 22, 2020;30(5):1547-1564. [CrossRef]
  43. Berger J, Milkman KL. What makes online content viral? Journal of Marketing Research. Apr 2012;49(2):192-205. [CrossRef]
  44. Suh B, Hong L, Pirolli P, Chi EH. Want to be retweeted? Large scale analytics on factors impacting retweet in Twitter network. Presented at: 2010 IEEE Second International Conference on Social Computing; Aug 20-22, 2010; Minneapolis, MN. [CrossRef]
  45. Del Vicario M, Bessi A, Zollo F, et al. The spreading of misinformation online. Proc Natl Acad Sci U S A. Jan 19, 2016;113(3):554-559. [CrossRef] [Medline]
  46. Everett MG, Valente TW. Bridging, brokerage and betweenness. Soc Networks. Jan 2016;44:202-208. [CrossRef] [Medline]
  47. Jiang X, Su MH, Hwang J, et al. Polarization over vaccination: ideological differences in Twitter expression about COVID-19 vaccine favorability and specific hesitancy concerns. Social Media + Society. Jul 2021;7(3). [CrossRef]
  48. Wawrzuta D, Jaworski M, Gotlib J, Panczyk M. What arguments against COVID-19 vaccines run on Facebook in Poland: content analysis of comments. Vaccines (Basel). May 10, 2021;9(5):481. [CrossRef] [Medline]
  49. Tustin JL, Crowcroft NS, Gesink D, Johnson I, Keelan J, Lachapelle B. User-driven comments on a Facebook advertisement recruiting Canadian parents in a study on immunization: content analysis. JMIR Public Health Surveill. Sep 20, 2018;4(3):e10090. [CrossRef] [Medline]
  50. Okuhara T, Ishikawa H, Okada M, Kato M, Kiuchi T. Japanese anti- versus pro-influenza vaccination websites: a text-mining analysis. Health Promot Int. Jun 1, 2019;34(3):552-566. [CrossRef] [Medline]
  51. Baines A, Ittefaq M, Abwao M. #Scamdemic, #plandemic, or #scaredemic: what Parler social media platform tells us about COVID-19 vaccine. Vaccines (Basel). Apr 22, 2021;9(5):421. [CrossRef] [Medline]
  52. Lahouati M, De Coucy A, Sarlangue J, Cazanave C. Spread of vaccine hesitancy in France: what about YouTube? Vaccine. Aug 10, 2020;38(36):5779-5782. [CrossRef] [Medline]
  53. Vosoughi S, Roy D, Aral S. The spread of true and false news online. Science. Mar 9, 2018;359(6380):1146-1151. [CrossRef] [Medline]
  54. Nisbet EC, Cooper KE, Garrett RK. The partisan brain: how dissonant science messages lead conservatives and liberals to (dis)trust science. Ann Am Acad Pol Soc Sci. 2015;658(1):36-66. [CrossRef]
  55. Merkley E, Stecula DA. Party elites or manufactured doubt? The informational context of climate change polarization. Sci Commun. Apr 2018;40(2):258-274. [CrossRef]
  56. Brulle RJ, Carmichael J, Jenkins JC. Shifting public opinion on climate change: an empirical assessment of factors influencing concern over climate change in the U.S., 2002–2010. Clim Change. Sep 2012;114(2):169-188. [CrossRef]
  57. Hamilton LC, Safford TG. Elite cues and the rapid decline in trust in science agencies on COVID-19. Sociol Perspect. Oct 2021;64(5):988-1011. [CrossRef]
  58. Lin F, Meng X, Zhi P. Are COVID-19 conspiracy theories for losers? Probing the interactive effect of voting choice and emotional distress on anti-vaccine conspiracy beliefs. Humanit Soc Sci Commun. 2025;12(1):470. URL: https://doi.org/10.1057/s41599-025-04774-3 [Accessed 2025-04-13] [CrossRef]
  59. Hou Z, Tong Y, Du F, et al. Assessing COVID-19 vaccine hesitancy, confidence, and public engagement: a global social listening study. J Med Internet Res. Jun 11, 2021;23(6):e27632. [CrossRef] [Medline]
  60. Yiannakoulias N, Slavik CE, Chase M. Expressions of pro- and anti-vaccine sentiment on YouTube. Vaccine. Apr 3, 2019;37(15):2057-2064. [CrossRef] [Medline]
  61. Gori D, Durazzi F, Montalti M, et al. Mis-tweeting communication: a vaccine hesitancy analysis among Twitter users in Italy. Acta Biomed. Oct 5, 2021;92(S6):e2021416. [CrossRef] [Medline]
  62. Karapetiantz P, Audeh B, Lillo-Le Louët A, Bousquet C. Discrepancy between personal experience and negative opinion with human papillomavirus vaccine in web forums. Stud Health Technol Inform. Jun 26, 2020;272:417-420. [CrossRef] [Medline]
  63. Bradshaw AS, Treise D, Shelton SS, et al. Propagandizing anti-vaccination: analysis of Vaccines Revealed documentary series. Vaccine. Feb 18, 2020;38(8):2058-2069. [CrossRef] [Medline]
  64. Basch CH, Zybert P, Reeves R, Basch CE. What do popular YouTube™ videos say about vaccines. Child Care Health Dev. 2017;43(4):499-503. [CrossRef]
  65. Yousefinaghani S, Dara R, Mubareka S, Papadopoulos A, Sharif S. An analysis of COVID-19 vaccine sentiments and opinions on Twitter. Int J Infect Dis. Jul 2021;108:256-262. [CrossRef] [Medline]
  66. Yin JDC. Media data and vaccine hesitancy: scoping review. JMIR Infodemiology. 2022;2(2):e37300. [CrossRef] [Medline]
  67. Roozenbeek J, van der Linden S, Goldberg B, Rathje S, Lewandowsky S. Psychological inoculation improves resilience against misinformation on social media. Sci Adv. Aug 26, 2022;8(34):eabo6254. [CrossRef] [Medline]
  68. Furini M. Identifying the features of ProVax and NoVax groups from social media conversations. Comput Human Behav. Jul 2021;120:106751. [CrossRef]
  69. Diaz P, Reddy P, Ramasahayam R, Kuchakulla M, Ramasamy R. COVID-19 vaccine hesitancy linked to increased internet search queries for side effects on fertility potential in the initial rollout phase following Emergency Use Authorization. Andrologia. Oct 2021;53(9):e14156. [CrossRef] [Medline]
  70. Odone A, Tramutola V, Morgado M, Signorelli C. Immunization and media coverage in Italy: an eleven-year analysis (2007-17). Hum Vaccin Immunother. 2018;14(10):2533-2536. [CrossRef] [Medline]
  71. Deiner MS, Fathy C, Kim J, et al. Facebook and Twitter vaccine sentiment in response to measles outbreaks. Health Informatics J. Sep 2019;25(3):1116-1132. [CrossRef] [Medline]
  72. Kozyreva A, Herzog SM, Lewandowsky S, et al. Resolving content moderation dilemmas between free speech and harmful misinformation. Proc Natl Acad Sci U S A. Feb 14, 2023;120(7):e2210666120. [CrossRef] [Medline]
  73. Lin F, Chen X, Cheng EW. Contextualized impacts of an infodemic on vaccine hesitancy: the moderating role of socioeconomic and cultural factors. Inf Process Manag. 2022;59(5):103013.
  74. Proposal for a regulation of the European Parliament and of the Council on a single market for digital services (Digital Services Act) and amending Directive 2000/31/EC. European Union. URL: https://eur-lex.europa.eu/legal-content/en/ALL/?uri=CELEX%3A52020PC0825 [Accessed 2024-01-09]
  75. Chen YP, Chen YY, Yang KC, et al. The prevalence and impact of fake news on COVID-19 vaccination in Taiwan: retrospective study of digital media. J Med Internet Res. Apr 26, 2022;24(4):e36830. [CrossRef] [Medline]


Abbreviations

CDC: Centers for Disease Control and Prevention
RR: relative risk


Edited by Tim Mackey; submitted 01.03.24; peer-reviewed by Shuai Zhang, Thiago Cerqueira-Silva; final revised version received 14.10.24; accepted 24.10.24; published 18.08.25.

Copyright

© Jason Dean-Chen Yin, Tzu-Chin Wu, Chia-Yun Chen, Fen Lin, Xiaohui Wang. Originally published in JMIR Infodemiology (https://infodemiology.jmir.org), 18.8.2025.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Infodemiology, is properly cited. The complete bibliographic information, a link to the original publication on https://infodemiology.jmir.org/, as well as this copyright and license information must be included.