Recent Articles
Misinformation represents an evolutionary paradox: despite its harmful impact on society, it persists and evolves, thriving in the information-rich environment of the digital age. This paradox challenges the conventional expectation that detrimental entities should diminish over time. The persistence of misinformation, despite advancements in fact-checking and verification tools, suggests that it possesses adaptive qualities that enable it to survive and propagate. This paper explores how misinformation, as a blend of truth and fiction, continues to resonate with audiences. The role of narratives in human history, particularly in the evolution of Homo narrans, underscores the enduring influence of storytelling on cultural and social cohesion. Despite the increasing ability of individuals to verify the accuracy of sources, misinformation remains a significant challenge, often spreading rapidly through digital platforms. Current behavioral research tends to treat pieces of misinformation as completely irrational, static, finite entities that can be definitively debunked, overlooking their dynamic and evolving nature. This approach limits our understanding of the behavioral and societal factors driving the transformation of misinformation over time. The persistence of misinformation can be attributed to several factors, including its role in fostering social cohesion, its perceived short-term benefits, and its use in strategic deception. Techniques such as extrapolation, interpolation, deformation, cherry-picking, and fabrication contribute to the production and spread of misinformation. Understanding these processes and the evolutionary advantages they confer is crucial for developing effective strategies to counter misinformation. By promoting transparency, critical thinking, and accurate information, society can begin to address the root causes of misinformation and create a more resilient information environment.
Understanding advocacy strategies is essential to improving dementia awareness, reducing stigma, supporting cognitive health promotion, and influencing policy to support people living with dementia. However, there is a dearth of evidence-based research on advocacy strategies used to support dementia awareness.
Spontaneous pharmacovigilance reporting systems are the main data source for signal detection for vaccines. However, there is a large time lag between the occurrence of an adverse event (AE) and its availability for analysis. With global mass COVID-19 vaccination campaigns and the accompanying volume of social media and web content, there is an opportunity for faster, near real-time monitoring of AEs potentially related to COVID-19 vaccine use. Our work aims to detect AEs from social media to augment those captured by spontaneous reporting systems.
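As a rough illustration of what mining social media posts for vaccine-related AEs can involve, here is a minimal keyword-matching sketch. It is not the method used in this work; the term lists (VACCINE_TERMS, AE_TERMS) and the function flag_possible_ae are hypothetical placeholders, and production pharmacovigilance pipelines typically normalize free text to MedDRA terms and rely on machine learning rather than fixed keyword lists.

```python
import re

# Illustrative, hypothetical term lists; real systems use curated lexicons
# and map mentions to standardized MedDRA terms.
VACCINE_TERMS = {"covid vaccine", "pfizer", "moderna", "astrazeneca", "booster"}
AE_TERMS = {"fever", "headache", "myocarditis", "fatigue", "sore arm", "chills"}

def flag_possible_ae(post: str) -> set[str]:
    """Return AE terms mentioned in a post that also mentions a COVID-19 vaccine."""
    text = post.lower()
    if not any(term in text for term in VACCINE_TERMS):
        return set()
    return {term for term in AE_TERMS if re.search(rf"\b{re.escape(term)}\b", text)}

posts = [
    "Got my Pfizer booster yesterday and woke up with a fever and chills.",
    "Coffee headache again, nothing to do with vaccines.",
]
for post in posts:
    print(flag_possible_ae(post))
# e.g. {'fever', 'chills'} for the first post (set order may vary), set() for the second
```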
After the US Supreme Court overturned Roe v. Wade, confusion followed regarding the legality of abortion in different states across the country. Recent studies found increased Google searches for abortion-related terms in restricted states after the Dobbs v. Jackson Women’s Health Organization decision was leaked. As patients and providers use Wikipedia (Wikimedia Foundation) as a predominant medical information source, we hypothesized that changes in reproductive health information-seeking behavior could be better understood by examining Wikipedia article traffic.
The complex interplay between sleep-related information—both accurate and misleading—and clinical public health is an emerging area of concern. Lack of awareness of the importance of sleep and inadequate sleep-related information, combined with misinformation about sleep disseminated through social media, nonexpert advice, commercial interests, and other sources, can distort individuals’ understanding of healthy sleep practices. Such misinformation can lead to the adoption of unhealthy sleep behaviors, reducing sleep quality and exacerbating sleep disorders. Simultaneously, poor sleep itself impairs critical cognitive functions, such as memory consolidation, emotional regulation, and decision-making. These impairments can heighten individuals’ vulnerability to misinformation, creating a vicious cycle that further entrenches poor sleep habits and unhealthy behaviors. Sleep deprivation is known to reduce the ability to critically evaluate information, increase suggestibility, and enhance emotional reactivity, making individuals more prone to accepting persuasive but inaccurate information. This cycle of misinformation and poor sleep creates a clinical public health issue that goes beyond individual well-being, influencing occupational performance, societal productivity, and even broader clinical public health decision-making. The effects are felt across various sectors, from health care systems burdened by sleep-related issues to workplaces impacted by decreased productivity due to sleep deficiencies. The need for comprehensive clinical public health initiatives to combat this cycle is critical. These efforts must promote sleep literacy, increase awareness of sleep’s role in cognitive resilience, and correct widespread sleep myths. Digital tools and technologies, such as sleep-tracking devices and artificial intelligence–powered apps, can play a role in educating the public and enhancing the accessibility of accurate, evidence-based sleep information. However, these tools must be carefully designed to avoid the spread of misinformation through algorithmic biases. Furthermore, research into the cognitive impacts of sleep deprivation should be leveraged to develop strategies that enhance societal resilience against misinformation. Sleep infodemiology and infoveillance, which involve tracking and analyzing the distribution of sleep-related information across digital platforms, offer valuable methodologies for identifying and addressing the spread of misinformation in real time. Addressing this issue requires a multidisciplinary approach, involving collaboration between sleep scientists, health care providers, educators, policy makers, and digital platform regulators. By promoting healthy sleep practices and debunking myths, it is possible to disrupt the feedback loop between poor sleep and misinformation, leading to improved individual health, better decision-making, and stronger societal outcomes.
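To make the idea of infoveillance slightly more concrete, the toy sketch below counts daily mentions of a few sleep myths in a stream of posts. The phrase list (MYTH_PHRASES) and the function daily_myth_counts are hypothetical; real infoveillance work would draw on curated lexicons, platform data access, and formal trend-detection statistics rather than this simple counting.

```python
from collections import Counter
from datetime import date

# Hypothetical myth-related phrases chosen only for illustration.
MYTH_PHRASES = ["you can catch up on sleep", "alcohol helps you sleep",
                "snoring is harmless", "4 hours of sleep is enough"]

def daily_myth_counts(posts: list[tuple[date, str]]) -> dict[date, Counter]:
    """Count myth-phrase mentions per day across a stream of (date, text) posts."""
    counts: dict[date, Counter] = {}
    for day, text in posts:
        lowered = text.lower()
        day_counter = counts.setdefault(day, Counter())
        for phrase in MYTH_PHRASES:
            if phrase in lowered:
                day_counter[phrase] += 1
    return counts

sample = [
    (date(2024, 3, 1), "Alcohol helps you sleep better, trust me."),
    (date(2024, 3, 1), "Honestly 4 hours of sleep is enough for adults."),
    (date(2024, 3, 2), "Snoring is harmless, everyone does it."),
]
for day, counter in sorted(daily_myth_counts(sample).items()):
    print(day, dict(counter))
```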
Social media has become a vital tool for health care providers to share information quickly. However, its lack of content curation and expert oversight poses risks of misinformation and premature dissemination of unvalidated data, and the rapid, large-scale spread of incorrect information can have widespread harmful effects.
Video games have rapidly become mainstream in recent decades, with over half of the US population involved in some form of digital gaming. However, concerns regarding the potential harms of excessive, disordered gaming have also risen. Internet gaming disorder (IGD) has been proposed by the American Psychiatric Association (APA) as a tentative psychiatric disorder requiring further study and is recognized as a behavioral addiction by the World Health Organization. Substance use among gamers has also become a concern, with caffeinated beverages, energy drinks, and prescription stimulants commonly used for performance enhancement.
During the COVID-19 pandemic, the rapid spread of misinformation on social media created significant public health challenges. Large language models (LLMs), pretrained on extensive textual data, have shown potential in detecting misinformation, but their performance can be influenced by factors such as prompt engineering (ie, modifying LLM requests to assess changes in output). One form of prompt engineering is role-playing, where, upon request, OpenAI’s ChatGPT imitates specific social roles or identities. This research examines how ChatGPT’s accuracy in detecting COVID-19–related misinformation is affected when it is assigned social identities in the request prompt. Understanding how LLMs respond to different identity cues can inform messaging campaigns, ensuring effective use in public health communications.
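To illustrate the role-playing setup described here, the sketch below assigns a social identity through the system message before asking for a misinformation judgment. The model name, identity text, and example claim are assumptions made for illustration and are not drawn from the study itself.

```python
from openai import OpenAI  # assumes the openai Python package (v1+) is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical identity and claim, chosen only to show the prompt structure.
identity = "You are a 45-year-old nurse who follows local news closely."
claim = "Drinking hot water every 15 minutes flushes out the coronavirus."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        # Role-play: the system message assigns the social identity.
        {"role": "system", "content": identity},
        # The user message asks for a misinformation judgment on the claim.
        {"role": "user", "content": (
            "Is the following claim about COVID-19 misinformation? "
            "Answer 'misinformation' or 'not misinformation', then explain briefly.\n\n"
            f"Claim: {claim}"
        )},
    ],
)

print(response.choices[0].message.content)
```

Holding the claim and instruction fixed while varying the identity string is, in essence, the manipulation the abstract describes; responses can then be scored against labeled claims to compare detection accuracy across assigned identities.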
Following the signing of the Tobacco 21 Amendment (T21) in December 2019 to raise the minimum legal age for the sale of tobacco products from 18 to 21 years in the United States, there is a need to monitor public responses and potential unintended consequences. Social media platforms, such as Twitter (subsequently rebranded as X), can provide rich data on public perceptions.