A survey of expert views on misinformation: Definitions, determinants, solutions, and future of the field
Sacha Altay, M. Berriche, Hendrik Heuer, J. Farkas, Steven Rathje
Harvard Kennedy School Misinformation Review, 2023-07-27. https://doi.org/10.37016/mr-2020-119
We surveyed 150 academic experts on misinformation and identified areas of expert consensus. Experts defined misinformation as false and misleading information, though views diverged on the importance of intentionality and on what exactly constitutes misinformation. The reason experts endorsed most often for why people believe and share misinformation was partisanship; lack of education was among the least endorsed. Experts were optimistic about the effectiveness of interventions against misinformation and supported system-level actions, such as changes to platform design and algorithms. The most agreed-upon future direction for the field was to collect more data outside of the United States.
Older Americans are more vulnerable to prior exposure effects in news evaluation
Benjamin A. Lyons
Harvard Kennedy School Misinformation Review, 2023-07-24. https://doi.org/10.37016/mr-2020-118
Older news users may be especially vulnerable to prior exposure effects, whereby news comes to be seen as more accurate over multiple viewings. I test this in re-analyses of three two-wave, nationally representative surveys in the United States (N = 8,730) in which respondents rated a series of mainstream, hyperpartisan, and false political headlines (139,082 observations). I find that prior exposure effects increase with age—being strongest for those in the oldest cohort (60+)—especially for false news. I discuss implications for the design of media literacy programs and policies regarding targeted political advertising aimed at this group.
Support for “doing your own research” is associated with COVID-19 misperceptions and scientific mistrust
Sedona Chinn, Ariel Hasell
Harvard Kennedy School Misinformation Review, 2023-06-12. https://doi.org/10.37016/mr-2020-117
Amid concerns about misinformation online and bias in news, there are increasing calls on social media to “do your own research.” In an abundant information environment, critical media consumption and information validation are desirable. However, using panel survey data, we find that positive perceptions toward “doing your own research” are associated with holding more misperceptions about COVID-19 and less trust in science over time. Support for “doing your own research” may be an expression of anti-expert attitudes rather than reflecting beliefs about the importance of cautious information consumption.
Less reliable media drive interest in anti-vaccine information
Samikshya Siwakoti, Jacob N. Shapiro, Nathan Evans
Harvard Kennedy School Misinformation Review, 2023-06-06. https://doi.org/10.37016/mr-2020-116
As progress on vaccine rollout in the United States slowed in spring 2021, it became clear that anti-vaccine information posed a public health threat. Using text data from 5,613 distinct COVID misinformation stories and 70 anti-vaccination Facebook groups, we tracked highly salient keywords regarding anti-vaccine discourse across Twitter, thousands of news websites, and the Google and Bing search engines from May through June 2021, a key period when vaccination progress stalled. Granger causality tests showed that searches for anti-vaccination terms on Google, as well as the appearance of these terms on Twitter, followed spikes in their appearance on less reliable media sites, but not spikes in mainstream press coverage.
Explaining beliefs in electoral misinformation in the 2022 Brazilian election: The role of ideology, political trust, social media, and messaging apps
Patrícia G. C. Rossini, Camila Mont’Alverne, Antonis Kalogeropoulos
Harvard Kennedy School Misinformation Review, 2023-05-16. https://doi.org/10.37016/mr-2020-115
The 2022 elections in Brazil have demonstrated that disinformation can have violent consequences, particularly when it comes from the top, raising concerns around democratic backsliding. This study leverages a two-wave survey to investigate individual-level predictors of holding electoral misinformation beliefs and the role of trust and information habits during the 2022 Brazilian elections. Our findings demonstrate that susceptibility to electoral misinformation is affected by factors such as political ideology, trust in the electoral process and democratic institutions, and information consumption, with those who participate in political groups in messaging apps being more likely to believe in electoral misinformation.
How effective are TikTok misinformation debunking videos?
P. Bhargava, Katie MacDonald, Christie Newton, Hause Lin, Gordon Pennycook
Harvard Kennedy School Misinformation Review, 2023-03-29. https://doi.org/10.37016/mr-2020-114
TikTok provides an opportunity for citizen-led debunking, in which users correct other users’ misinformation. In the present study (N = 1,169), participants watched and rated the credibility of (1) a misinformation video, (2) a correction video, or (3) a misinformation video followed by a correction video (“debunking”). Afterwards, participants rated both a factual and a misinformation video about the same topic and judged the accuracy of the claim advanced by the misinformation video. We found modest evidence for the effectiveness of debunking on people’s ability to subsequently discern between true and false videos, but stronger evidence of an effect on subsequent belief in the false claim itself.
Examining accuracy-prompt efficacy in combination with using colored borders to differentiate news and social content online
Venya Bhardwaj, Cameron Martel, David G. Rand
Harvard Kennedy School Misinformation Review, 2023-02-27. https://doi.org/10.37016/mr-2020-113
Recent evidence suggests that prompting users to consider the accuracy of online posts increases the quality of news they share on social media. Here we examine how accuracy prompts affect user behavior in a more realistic context, and whether their effect can be enhanced by using colored borders to differentiate news from social content. Our results show that accuracy prompts increase news-sharing quality without affecting sharing of social (non-news) posts or “liking” behavior. We also find that adding colored borders around news posts increased overall engagement with news regardless of veracity, and decreased engagement with social posts.
Search engine manipulation to spread pro-Kremlin propaganda
Evan M. Williams, Kathleen M. Carley
Harvard Kennedy School Misinformation Review, 2023-02-16. https://doi.org/10.37016/mr-2020-112
The Kremlin’s use of bots and trolls to manipulate the recommendation algorithms of social media platforms has been well documented by journalists and researchers. However, pro-Kremlin manipulation of search engine algorithms has rarely been explored. We examine pro-Kremlin attempts to manipulate search engine results by comparing the backlink and keyphrase networks of US, European, and Russian think tanks, as well as Kremlin-linked “pseudo” think tanks that target Western audiences. Our evidence suggests that pro-Kremlin pseudo-think tanks are artificially boosted and co-amplified by a network of low-quality websites that generate millions of backlinks to these target sites. We find that Google’s search algorithm appears to penalize Russian and pseudo-think tank domains.
Designing misinformation interventions for all: Perspectives from AAPI, Black, Latino, and Native American community leaders on misinformation educational efforts
Angela Y. Lee, Ryan C. Moore, Jeffrey T. Hancock
Harvard Kennedy School Misinformation Review, 2023-02-07. https://doi.org/10.37016/mr--2020-111
This paper examines strategies for making misinformation interventions responsive to four communities of color. Using qualitative focus groups with members of four non-profit organizations, we worked with community leaders to identify misinformation narratives, sources of exposure, and effective intervention strategies in the Asian American Pacific Islander (AAPI), Black, Latino, and Native American communities. Analyzing the findings from those focus groups, we identified several pathways through which misinformation prevention efforts can be more equitable and effective. Building from our findings, we propose steps practitioners, academics, and policymakers can take to better address the misinformation crisis within communities of color. We illustrate how these recommendations can be put into practice through examples from workshops co-designed with a non-profit working on disinformation and media literacy.
Mapping the website and mobile app audiences of Russia’s foreign communication outlets, RT and Sputnik, across 21 countries
Julia Kling, F. Toepfl, Neil J. Thurman, Richard Fletcher
Harvard Kennedy School Misinformation Review, 2022-12-22. https://doi.org/10.37016/mr-2020-110
Following Russia’s invasion of Ukraine, policymakers worldwide have taken measures to curb the reach of Russia’s foreign communication outlets, RT and Sputnik. Mapping the audiences of these outlets in 21 countries, we show that in the quarter before the invasion, at least via their official websites and mobile apps, neither outlet reached more than 5% of the digital populations of any of these countries each month. Averaged across all countries, both outlets’ website and mobile app reach remained approximately constant between 2019 and 2021, was higher for men, and increased with audiences’ age.