Identifying Misinformation About Unproven Cancer Treatments on Social Media Using User-Friendly Linguistic Characteristics: Content Analysis

Ilona Fridman, Dahlia Boyles, Ria Chheda, Carrie Baldwin-SoRelle, Angela B Smith, Jennifer Elston Lafata

JMIR Infodemiology, vol. 5, e62703. Published 2025-02-12. DOI: 10.2196/62703
Citations: 0
Abstract
Background: Health misinformation, prevalent on social media, poses a significant threat to individuals, particularly those dealing with serious illnesses such as cancer. Current recommendations for how users can avoid cancer misinformation are difficult to follow because they require users to have research skills.
Objective: This study addresses this problem by identifying user-friendly characteristics of misinformation that could be easily observed by users to help them flag misinformation on social media.
Methods: Using a structured review of the literature on algorithmic misinformation detection across political, social, and computer science, we assembled linguistic characteristics associated with misinformation. We then collected datasets by mining X (previously known as Twitter) posts using keywords related to unproven cancer therapies and cancer center usernames. This search, coupled with manual labeling, allowed us to create a dataset with misinformation and 2 control datasets. We used natural language processing to model linguistic characteristics within these datasets. Two experiments with 2 control datasets used predictive modeling and Lasso regression to evaluate the effectiveness of linguistic characteristics in identifying misinformation.
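The modeling approach described above pairs hand-crafted linguistic characteristics with a lasso-penalized classifier. The following is a minimal illustrative sketch, not the authors' code: the feature definitions, word lists, example posts, and labels are all hypothetical, and scikit-learn's L1-penalized logistic regression stands in for the lasso-based predictive modeling the abstract describes.

```python
# Illustrative sketch (not the study's actual pipeline): score toy
# linguistic characteristics named in the abstract, then fit an
# L1-penalized ("lasso-style") logistic regression to flag posts.
import re

import numpy as np
from sklearn.linear_model import LogisticRegression


def linguistic_features(post: str) -> list:
    """Toy counts of characteristics the study examined: tentative
    language, certainty/absolute language, numbers, URLs, hashtags.
    The word lists here are illustrative, not the study's lexicons."""
    tentative = len(re.findall(r"\b(may|might|perhaps|possibly)\b", post, re.I))
    certainty = len(re.findall(r"\b(always|never|definitely|guaranteed|cures?)\b", post, re.I))
    numbers = len(re.findall(r"\d+", post))
    urls = len(re.findall(r"https?://\S+", post))
    hashtags = post.count("#")
    return [tentative, certainty, numbers, urls, hashtags]


# Hypothetical posts and labels (1 = misinformation) for demonstration.
posts = [
    "This herb definitely cures cancer in 30 days, guaranteed!",
    "New trial results may inform treatment; details: https://example.org #oncology",
    "100% of patients recovered, no chemo needed, always works",
    "Screening guidelines might change; see https://example.org/news #health",
]
labels = [1, 0, 1, 0]

X = np.array([linguistic_features(p) for p in posts])
# penalty="l1" gives lasso-style sparsity; the liblinear solver supports it.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=10.0)
clf.fit(X, labels)
print(clf.predict(X))
```

The sign of each fitted coefficient plays the role the abstract describes: a positive weight on certainty or numbers would push a post toward the misinformation class, while negative weights on tentative language, URLs, or hashtags would push it away.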
Results: User-friendly linguistic characteristics were extracted from 88 papers. The short-listed characteristics did not yield optimal results in the first experiment but predicted misinformation with an accuracy of 73% in the second experiment, in which posts with misinformation were compared with posts from health care systems. The linguistic characteristics that consistently negatively predicted misinformation included tentative language, location, URLs, and hashtags, while numbers, absolute language, and certainty expressions consistently predicted misinformation positively.
Conclusions: This analysis resulted in user-friendly recommendations, such as exercising caution when encountering social media posts featuring unwavering assurances or specific numbers lacking references. Future studies should test the efficacy of the recommendations among information users.