Research note: Explicit voter fraud conspiracy cues increase belief among co-partisans but have broader spillover effects on confidence in elections
Benjamin A. Lyons & Kaitlyn S. Workman. Harvard Kennedy School Misinformation Review, June 7, 2022. https://doi.org/10.37016/mr-2020-99

In this pre-registered experiment, we test the effects of conspiracy cue content in the context of the 2020 U.S. elections. Specifically, we varied whether respondents saw an explicitly stated conspiracy theory, one that was merely implied, or none at all. We found that explicit cues about rigged voting machines increase belief in such theories, especially when the cues target the opposing political party. Explicit cues also decrease confidence in elections regardless of the targeted party, but they have no effect on satisfaction with democracy or support for election security funding. Thus, conspiratorial cues can decrease confidence in institutions, even among the out-party and irrespective of a change in conspiracy beliefs. The results demonstrate that even in a landscape saturated in claims of fraud, voters still respond to novel explicit cues.
Clarity for friends, confusion for foes: Russian vaccine propaganda in Ukraine and Serbia
Katrina Keegan. Harvard Kennedy School Misinformation Review, April 25, 2022. https://doi.org/10.37016/mr-2020-98

This paper examines how Russia tailors its vaccine propaganda to hostile and friendly audiences, like Ukraine and Serbia. Web scraping of all articles about vaccines on Russian state-owned websites from December 2020 to November 2021 provided data for quantitative topic modeling and qualitative analysis. This revealed that the Kremlin muddles issues and sows confusion for Ukrainians but feeds Serbians focused, repetitive narratives. Therefore, countering Russian propaganda proactively also requires a tailored approach. Journalists and public communications officials should clarify information and separate unrelated issues in Russia-hostile places like Ukraine but add nuance and context to narratives in Russia-friendly places like Serbia.
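The study's quantitative step builds on counting how often terms recur across each audience's corpus before fitting topic models. A minimal stdlib sketch of that term-counting step, using invented placeholder articles rather than the study's scraped data:

```python
# Hypothetical sketch of the term-counting step underlying topic modeling.
# The study fit full topic models on scraped Russian state-media articles;
# these two tiny corpora are invented purely for illustration.
from collections import Counter
import re

articles_ua = [
    "vaccine side effects unclear, officials disagree on safety",
    "conflicting reports question vaccine safety and supply",
]
articles_rs = [
    "sputnik vaccine is safe and effective, experts say",
    "sputnik vaccine is safe, rollout continues smoothly",
]

def term_counts(docs):
    """Lowercase, tokenize, and count terms across a document set."""
    tokens = re.findall(r"[a-z]+", " ".join(docs).lower())
    return Counter(tokens)

ua, rs = term_counts(articles_ua), term_counts(articles_rs)

# A focused, repetitive narrative shows up as high counts for a few terms.
print(rs.most_common(3))
```

In a real pipeline these counts would feed a document-term matrix for a topic model; the contrast the paper draws (diffuse themes for Ukraine, repeated ones for Serbia) would appear as flatter versus more peaked term distributions.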
Who is afraid of fake news? Modeling risk perceptions of misinformation in 142 countries
Aleksi Knuutila, Lisa-Maria Neudert & P. Howard. Harvard Kennedy School Misinformation Review, April 12, 2022. https://doi.org/10.37016/mr-2020-97

Using survey data from 154,195 respondents in 142 countries, we investigate internet user perceptions of the risks associated with being exposed to misinformation. We find that: 1) The majority of regular internet users globally (58.5%) worry about misinformation, and young and low-income groups are most likely to be concerned. 2) Risk perception among internet users varies starkly across regions, with concern highest in Latin America and the Caribbean (74.2%) and lowest in South Asia (31.2%). 3) Differences are unrelated to the prevalence of misinformation, yet concern is highest in countries with liberal democratic governments. We discuss implications for successful policy and platform interventions.
Partisan reasoning in a high stakes environment: Assessing partisan informational gaps on COVID-19
E. Peterson & S. Iyengar. Harvard Kennedy School Misinformation Review, March 30, 2022. https://doi.org/10.37016/mr-2020-96

Using a survey conducted in July 2020, we establish a divide in the news sources partisans prefer for information about the COVID-19 pandemic and observe partisan disagreements in beliefs about the virus. These divides persist when respondents face financial costs for incorrectly answering questions. This supports a view in which the informational divisions revealed in surveys on COVID-19 are genuine differences of opinion, not artifacts of insincere cheerleading. The implication is that efforts to correct misinformation about the virus should focus on changing sincere beliefs while also accounting for information search preferences that impede exposure to correctives among those holding misinformed views.
Studying mis- and disinformation in Asian diasporic communities: The need for critical transnational research beyond Anglocentrism
Sarah Nguyễn, Rachel Kuo, M. Reddi, Lan Li & R. Moran. Harvard Kennedy School Misinformation Review, March 24, 2022. https://doi.org/10.37016/mr-2020-95

Drawing on preliminary research about the spread of mis- and disinformation across Asian diasporic communities, we advocate for qualitative research methodologies that can better examine historical, transnational, multilingual, and intergenerational information networks. Using examples of case studies from Vietnam, Taiwan, China, and India, we discuss research themes and challenges including legacies of multiple imperialisms, nationalisms, and geopolitical tensions as root causes of mis- and disinformation; difficulties in data collection due to private and closed information networks, language translation and interpretation; and transnational dimensions of information infrastructures and media platforms. This commentary introduces key concepts driven by methodological approaches to better study diasporic information networks beyond the dominance of Anglocentrism in existing mis- and disinformation studies.
Hide and seek: The connection between false beliefs and perceptions of government transparency
Mathieu Lavigne, É. Bélanger, R. Nadeau, Jean-François Daoust & E. Lachapelle. Harvard Kennedy School Misinformation Review, March 16, 2022. https://doi.org/10.37016/mr-2020-90

This research examines how false beliefs shape perceptions of government transparency in times of crisis. Measuring transparency perceptions using both closed- and open-ended questions drawn from a Canadian panel survey, we show that individuals holding false beliefs about COVID-19 are more likely to have negative perceptions of government transparency. They also tend to rely on their false beliefs when asked to justify why they think governments are not being transparent about the pandemic. Our findings suggest that the inability to successfully debunk misinformation could worsen perceptions of government transparency, further eroding political support and contributing to non-compliance with public health directives.
A story of (non)compliance, bias, and conspiracies: How Google and Yandex represented Smart Voting during the 2021 parliamentary elections in Russia
M. Makhortykh, Aleksandra Urman & M. Wijermars. Harvard Kennedy School Misinformation Review, March 7, 2022. https://doi.org/10.37016/mr-2020-94

On 3 September 2021, a Russian court forbade Google and Yandex to display search results for “Smart Voting,” the query referring to a tactical voting project by the jailed Russian opposition leader Alexei Navalny. To examine whether the two search engines complied with the court order, we collected top search outputs for the query from Google and Yandex. Our analysis demonstrates the lack of compliance from both engines; however, while Google continued prioritizing outputs related to the opposition’s web resources, Yandex removed links to them and, in some cases, promoted conspiratorial claims aligning with the Russian authorities’ anti-Western narrative.
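An audit like this one boils down to collecting each engine's top results and measuring what share of them point at a watched set of domains. A minimal sketch of that measurement, where the result lists and domains are invented placeholders, not the study's data:

```python
# Hypothetical sketch of a search-output audit: what fraction of an
# engine's top results link to a watched set of domains? The URL lists
# and domain names below are invented for illustration only.
from urllib.parse import urlparse

def domain_share(results, domains):
    """Fraction of result URLs whose host is in the watched domain set."""
    hosts = [urlparse(url).netloc for url in results]
    return sum(host in domains for host in hosts) / len(results)

watched = {"votesmart.example.org"}  # stand-in for opposition resources
engine_a = ["https://votesmart.example.org/list", "https://news.example.com/a"]
engine_b = ["https://state.example.ru/claim", "https://news.example.com/a"]

# Divergent shares across engines would suggest differential compliance.
print(domain_share(engine_a, watched), domain_share(engine_b, watched))
```

Repeating this measurement over time and across locations is what lets an audit separate ranking changes from ordinary result churn.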
Ridiculing the “tinfoil hats:” Citizen responses to COVID-19 misinformation in the Danish facemask debate on Twitter
Nicklas Johansen, S. Marjanovic, Cathrine Valentin Kjaer, R. Baglini & Rebecca Adler-Nissen. Harvard Kennedy School Misinformation Review, March 2, 2022. https://doi.org/10.37016/mr-2020-93

We study how citizens engage with misinformation on Twitter in Denmark during the COVID-19 pandemic. We find that misinformation regarding facemasks is not corrected through counter-arguments or fact-checking. Instead, many tweets rejecting misinformation use humor to mock misinformation spreaders, whom they pejoratively label wearers of “tinfoil hats.” Tweets rejecting misinformation project a superior social position and leave the concerns of misinformation spreaders unaddressed. Our study highlights the role of status in people’s engagement with online misinformation.
Addendum to: Research note: Examining potential bias in large-scale censored data
Jennifer Allen, M. Mobius, David M. Rothschild & Duncan J. Watts. Harvard Kennedy School Misinformation Review, February 24, 2022. https://doi.org/10.37016/mr-2020-89

Addendum to HKS Misinformation Review “Research note: Examining potential bias in large-scale censored data” (https://doi.org/10.37016/mr-2020-74), published on July 26, 2021.
Research note: Tiplines to uncover misinformation on encrypted platforms: A case study of the 2019 Indian general election on WhatsApp
Ashkan Kazemi, Kiran Garimella, Gautam Kishore Shahi, Devin Gaffney & Scott A. Hale. Harvard Kennedy School Misinformation Review, January 31, 2022. https://doi.org/10.37016/mr-2020-91

There is currently no easy way to discover potentially problematic content on WhatsApp and other end-to-end encrypted platforms at scale. In this paper, we analyze the usefulness of a crowd-sourced tipline through which users can submit content (“tips”) that they want fact-checked. We compared the tips sent to a WhatsApp tipline run during the 2019 Indian general election with the messages circulating in large, public groups on WhatsApp and other social media platforms during the same period. We found that tiplines are a very useful lens into WhatsApp conversations: a significant fraction of messages and images sent to the tipline match with the content being shared on public WhatsApp groups and other social media. Our analysis also shows that tiplines cover the most popular content well, and a majority of such content is often shared to the tipline before appearing in large, public WhatsApp groups. Overall, our findings suggest tiplines can be an effective source for discovering potentially misleading content.
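The core operation here is matching tipline submissions against messages observed in public groups. The study used more robust matching (including for images); a toy sketch of text matching via Jaccard similarity over token sets, with invented messages and an assumed threshold:

```python
# Hypothetical sketch of matching tipline tips to public-group messages.
# The real system used stronger text/image matching; this Jaccard toy
# uses invented messages, and THRESHOLD is an assumed cutoff.
import re

def tokens(text):
    return set(re.findall(r"[a-z]+", text.lower()))

def jaccard(a, b):
    """Similarity of two token sets: |intersection| / |union|."""
    return len(a & b) / len(a | b) if a | b else 0.0

tips = ["Forwarded claim: drinking hot water cures the virus"]
public_msgs = [
    "drinking hot water cures the virus, share widely",
    "election results announced for three districts today",
]

THRESHOLD = 0.5  # real systems tune this against labeled match pairs
matches = [
    (tip, msg)
    for tip in tips
    for msg in public_msgs
    if jaccard(tokens(tip), tokens(msg)) >= THRESHOLD
]
print(matches)
```

Timestamping each matched pair is what would let one check the paper's ordering claim, i.e., whether content reached the tipline before it spread in large public groups.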