Bots and Computational Propaganda: Automation for Communication and Control
S. Woolley
Social Media and Democracy, published 2020-08-31. DOI: 10.1017/9781108890960.006
Citations: 7
Abstract
Public awareness of the threat of political bots, and international fears about armies of automated accounts taking over civic conversations on social media, reached a peak in the spring of 2017. On May 8 of that year, former Acting US Attorney General Sally Yates and former US Director of National Intelligence James R. Clapper Jr. sat before Congress to testify on what they called "the Russian toolbox" used in online efforts to manipulate the 2016 US election (Washington Post Staff 2017). In response to their testimony, and to a larger US intelligence community (IC) report on the subject, Senator Sheldon Whitehouse said, "I went through the list [of tools used by the Russians] . . . it looked like propaganda, fake news, trolls, and bots. We can all agree from the IC report that those were in fact used in the 2016 election" (Washington Post Staff 2017). Yates and Clapper argued that the Russian government and its commercial proxy, the Internet Research Agency (IRA), made substantive use of bots to spread disinformation and inflame polarization during the 2016 US presidential election. These comments mirrored concurrent allegations made by other public officials, as well as by academic researchers and investigative journalists, around the globe. Eight months earlier, during a speech before her country's parliament, German Chancellor Angela Merkel raised concerns that bots would affect the outcome of Germany's upcoming election (Copley 2016). Shortly thereafter, the New York Times described the rise of "a battle among political bots" on Twitter. Around the same time, research from the University of Southern California's Information Sciences Institute concretized the ways that social media bots were being used to manipulate public opinion: