
Policy and Internet: Latest Publications

Effects of online citizen participation on legitimacy beliefs in local government. Evidence from a comparative study of online participation platforms in three German municipalities
Tier 1 Literature Q1 COMMUNICATION Pub Date: 2023-11-09 DOI: 10.1002/poi3.371
Tobias Escher, Bastian Rottinghaus
Abstract In order to generate legitimacy for policies and political institutions, governments regularly involve citizens in the decision‐making process, increasingly so via the Internet. This research investigates whether online participation does indeed have a positive impact on the legitimacy beliefs of those citizens engaging with the process, and which particular aspects of the participation process, the individual participants, and the local context contribute to these changes. Our surveys of participants in almost identical online consultations in three German municipalities show that the participation process and its expected results have a sizeable effect on satisfaction with local political authorities and local regime performance. While most participants report at least slightly more positive perceptions, mainly output‐oriented ones, for some, engagement with the process leads not to more but in fact to less legitimacy. We find this to be the case both for those participants who remain silent and for those who participate intensively. Our results also confirm the important role of existing individual resources and context‐related attitudes such as trust in and satisfaction with local (not national) politics. Finally, our analysis shows that online participation can enable constructive discussion, deliver useful results, and attract people to engage who would not have participated offline.
Citations: 0
“Highly nuanced policy is very difficult to apply at scale”: Examining researcher account and content takedowns online
Tier 1 Literature Q1 COMMUNICATION Pub Date: 2023-11-06 DOI: 10.1002/poi3.374
Aaron Y. Zelin
Abstract Since 2019, researchers examining, archiving, and collecting extremist and terrorist materials online have increasingly been taken offline, in part a consequence of the automation of content moderation by different technology companies and of national governments calling for ever‐quicker takedowns. Based on an online survey of peers in the field, this research highlights that up to 60% of researchers surveyed have had either their accounts, or content they have posted or stored online, taken down from varying platforms. Beyond the quantitative data, this research also garnered qualitative answers about concerns individuals in the field had related to this problem set: the lack of transparency on the part of the technology companies; the hindering of actual research on, and understanding of, complicated and evolving issues related to different extremist and terrorist phenomena; the undermining of potential collaboration within the research field; and the potential for self‐censorship online. An easy solution would be a whitelist, though this carries inherent downsides as well, especially given differences between researchers at different stages of their careers, with or without institutional affiliation, and inequalities between researchers from the West and the Global South. Either way, securitizing research, in whatever form it evolves in the future, will fundamentally hurt research.
Citations: 0
Special issue: The (international) politics of content takedowns: Theory, practice, ethics
Tier 1 Literature Q1 COMMUNICATION Pub Date: 2023-11-06 DOI: 10.1002/poi3.375
James Fitzgerald, Ayse D. Lokmanoglu
Content takedowns have emerged as a key regulatory pillar in the global fight against misinformation and extremism. Despite their increasing ubiquity as disruptive tools in political processes, however, their true efficacy remains up for debate. We “know,” for example, that takedowns had a strong disruptive effect on Islamic State-supporting networks from 2014 onwards (Conway et al., 2019), but we do not know whether constraining avenues for expression may ultimately accelerate acts of violence. We also know that extreme-right networks have weaponised content takedowns as evidence of victimization and the steadfast erosion of “free speech,” often underpinning calls to (violent) action and leveraging the popularity of alt-media—such as Gab, Rumble, Truth Social and Parler—as refuges for the persecuted and de-platformed alike. There is need for caution, too, as takedowns are applied by authoritarian governments to stifle dissent—measures increasingly absorbed into law (see Jones, 2022)—while in various theaters of conflict, content takedowns have erased atrocity and resistance, ultimately disrupting the archiving of war (see Banchik, 2021). This special issue collates inter-disciplinary perspectives on how the policies and practices of content takedowns interact, with consequences for international politics. Across 11 papers, we explore how content takedowns variously interface with: democracy, history, free speech, national and regional regulations, activism, partisanship, violent extremism, effects on marginalized populations, strategies and techniques (e.g., self-reporting, AI, and variations amongst platforms), and flexibility and adaptability (e.g., migration, hidden messages) of harmful content and actors. The papers in this issue are geographically diverse, with perspectives from Latin America, the Middle East and North Africa, Europe, North America, and Oceania.
The editors consider content takedowns as a function of content moderation, aligning with the consensus view (see Gillespie et al., 2020); nevertheless, a review of the literature finds that content takedowns are rarely treated as the primary object of inquiry. While the subsumption of content takedowns as a subtopic of content moderation is understandable, this Special Issue attempts to foreground content takedowns as the primary focus for analysis: a subtle epistemological shift that provides a collective contribution to academic and policy-facing debates. To that end, it is necessary to define our basic terms of reference. Turning first to content moderation, one of the earliest—and most cited—interpretations is that of Kaplan and Haenlein (2010), who view it as ‘the self-regulation of social media companies for the safety of its users'. Though useful, this interpretation fails to account for an intractable proviso: tech companies act as intermediaries to the content they are hosting and removing, but do not want to be liable for the content (Caplan & Napoli, 2018; Gillespie, 201
(2020) use Singh and Bankston's (2018) broad typology to define content moderation, and define content takedowns as the removal of “problematic content” by platforms or other information intermediaries in response to legal or policy demands. Such removals occur across different categories, including but not limited to: government and legal content demands; copyright requests; trademark requests; network shutdowns and service disruptions; “right to be forgotten” removal requests; and removals based on community guidelines. Having established these basic parameters, we now turn to a commentary on some of the most significant political dimensions of content moderation and content takedowns, before offering brief individual summaries of each paper.

In 2010, Tarleton Gillespie took aim at social media companies' collective claim that they merely provide neutral sites of exchange upon which debate and politics happen to unfold. By gradually replacing self-descriptive terms such as “company” and “service” with “platform”, these entities projected an image of neutrality: a “platform” is “a ‘raised level surface’ designed to facilitate some activity that will subsequently take place” (Gillespie, 2010, p. 350). Pure neutrality is, of course, an illusion, and the political sympathies these businesses expressed in their early days tended to seek the assuring hand of free-market capitalism, ideally unencumbered by government regulation (see Fuchs, 2013). Google and YouTube, for example, positioned themselves as “defenders of free speech” (Gillespie, 2010, p. 356); in response to US Senator Joe Lieberman's demand that jihadist content be removed from YouTube, the platform justified its partial compliance as “encouraging free speech and defending everyone's right to express unpopular points of view … allowing our users to view all acceptable content and make up their own minds” (YouTube Team, 2008, cited ibid., p. 356). From the vantage point of 2023, these claims to neutrality seem misguided at best, or at least naive about the extent to which the major social media platforms would come to occupy positions of regulatory power, competing with, and in some cases usurping, the traditional role of the state (see Klonick, 2017).

The application of content moderation policies grants social media companies enormous power; by essentially “establishing the norms of what information and behaviors are censored or promoted on a platform” (González-Bailón & Lelkes, 2023, p. 162), these curated positions do not simply seep into society from some elevated surface but are dialectically embedded within, and refract, society itself. To that extent, early scholarship distinguishing “offline” from “online” ontologies reads as though written in a different world; today there is a much stronger consensus that the behaviors, policies, and identities of social media platforms shape countless realities and horizons of possibility, whether protecting or weakening democratic guardrails (Campos Mello, 2020), molding adolescents' patterns of socializing (Bucknell Bossen & Kottasz, 2020), or accommodating the marked rise in ADHD self-diagnosis (Yeung et al., 2022). The systematic moderation of content is therefore far more than a regulatory concession to the expectations (or legal demands) of states or governing bodies; it is a means by which social media platforms procedurally generate their own identities, identities mirrored in the (political) cultures they allow to flourish within their domains. Put simply, content moderation shapes the world we live in.

Social media companies undoubtedly recognize the power of content moderation as a fulcrum of their identity/brand, notwithstanding a long-standing lack of transparency about how and why moderation decisions are made in practice (see Gorwa & Ash, 2020; Rooney, 2023). Indeed, one might argue that this dynamic was central to Meta's 2023 launch of Threads: a major new social media platform seeking to build on a vast, pre-existing user base. Ostensibly a clone of Twitter, Threads was created in response to Elon Musk's takeover of that platform. Meta CEO Mark Zuckerberg positioned Threads as a “friendly place”, his initial posts making clear that a moderated culture of “kindness” would be “key to its success” (Chan, 2023), while, by effectively outsourcing content moderation to users (Nix, 2023), Meta's President of Global Affairs Nick Clegg suggested that what people see on Threads would be “meaningful to you”. On Musk's Twitter, rebranded as “X” on 23 July 2023, users were afforded a different kind of freedom.

The ruptures of contemporary world politics, including the continued fomenting of authoritarian populism (see Schäfer, 2022), suggest that the road to this ideal will at the very least be laden with resistance, bringing the firm commitment of extremist actors to resist wholesale changes that might damage their political projects (McNeil-Wilson & Flonk, 2023), to say nothing of their commercial interests (see Caplan & Gillespie, 2020). Regulation by, within, or beyond the state may offer the most immediate promise of meaningful change, but we must be wary of “mythical claims about regulatory efficiency” (Mansell, 2023, p. 145) and temper expectations of what top-down regulation can accomplish on its own, however ambitious or laudable such initiatives may be.

From below, content moderation practices (including content takedowns) can be seen to nourish, if not galvanize, political resistance, potentially energizing global civil society, albeit often with unintended consequences (see Alimardani & Elswah, 2021). The struggles for visibility among content creators in India, Indonesia, and Pakistan (Zeng & Kaye, 2022), content moderators (Roberts, 2019), and marginalized groups more generally (Jackson, 2023) show how “offline” and “online” forces of marginalization merge with and replicate one another within a shared ontology. (Online) resistance to moderation and takedown measures accordingly generates the potential for (offline) reconfigurations of identity, and with it an expansion of the spaces in which action, speech, and new political identities and collective action are constituted (see West, 2017). If this dynamic exists, it applies equally to those categories of collective actors who do not struggle for the progressive social visions outlined above. As Fitzgerald and Gerrand (2023) and Mattheis and Kingdon (2023) note, content takedowns and other moderation practices, far from erasing the extremist identities of far-right actors, can be a boon to their collective capacity to (self-)present as righteous resisters against the oppressive forces of censorship, while also “gaming” the norms of content moderation and removal to ensure that the content they wish to push to sympathetic followers ultimately finds a way through. The possibility of activist communities “raging against the machine” (West, 2017) and striving to free themselves from the shackles of state/social-media control is indeed a powerful and potentially transformative force which, alongside top-down measures, will surely bear on how the future of content moderation continues to shape international politics. The extent to which content takedowns specifically affect these processes merits further investigation and constitutes one of the central themes of this special issue.

Finally, although our opening statements have dwelt at length on the modern face of “moderation”, we must pause to consider whether the dilemmas and possibilities raised by content takedowns are inherently new. As Zhang (2023) argues, the (political) regulation of how speech is permitted (or denied) reflects a long-standing philosophical conflict between institutional and governance cultures of democratic control, of which content takedowns merely represent the latest frontier. Santini et al. (2023) show that although social media take the lead in spreading political misinformation, we cannot ignore how, in the Brazilian case, the non-removal of problematic content ensured its amplification by the country's more powerful broadcast media. Finally, Watkin (2023) sees in content takedowns one of the most elemental dynamics of power: exploitative labor practices. Focusing on the psychological harms inflicted on content moderators tasked with screening and removing terrorist media, she argues that a blueprint for their protection already exists: it need only be reconfigured for the modern context. In closing, there is much to ponder about the capacity of content moderation to reflect or to transform what defines our fractured political landscape and those within its spaces
Citations: 0
Countering online terrorist content: A social regulation approach
Tier 1 Literature Q1 COMMUNICATION Pub Date: 2023-10-30 DOI: 10.1002/poi3.373
Amy‐Louise Watkin
Abstract After a period of self‐regulation, countries around the world began to implement regulations requiring the removal of terrorist content from tech platforms. However, much of this regulation has been criticised for a variety of reasons, most prominently over concerns that it infringes free speech and creates unfair burdens for smaller platforms. In addition, the regulation is heavily centred on content moderation, yet fails to consider or address the psychosocial risks moderation poses to human content moderators. This paper argues that where regulation has been heavily criticised yet continues to inspire similar regulation, a new regulatory approach is required. The aim of this paper is to undertake an introductory examination of the use of a social regulation approach in three other industries (environmental protection, consumer protection, and occupational health and safety) in order to identify new regulatory avenues that could inform the development of regulation which both counters terrorist content on tech platforms and attends to the safety of content moderators.
Citations: 0
Content takedowns and activist organizing: Impact of social media content moderation on activists and organizing
Tier 1 Literature Q1 COMMUNICATION Pub Date: 2023-10-26 DOI: 10.1002/poi3.372
Diane Jackson
Social media companies are increasingly transcending the offline sphere by shaping online discourse that has direct effects on offline outcomes. Recent polls have shown that as many as 70% of young people in the United States have used social media for information about political elections (Booth et al., 2020) and almost 30% of US adults have used social media to post about political and social issues (McClain, 2021). Further, social media have become a site of organizing with over half of US adults reporting having used social media as a tool for gathering or sharing information about political or social issues (Anderson et al., 2018). Despite the necessity of removing content that may breach the content guidelines set forth by social media companies such as Facebook, Instagram, Twitter, and TikTok, a gap persists between the content that violates guidelines and the content that is removed from social media sites. For activists particularly, content suppression is not only a matter of censorship at the individual level. During a time of significant mobilization, activists rely on their social media platforms perhaps more than ever before. This has been demonstrated by the Facebook Arabic page, “We Are All Khaled Said,” which has been credited with promoting the 2011 Egyptian Revolution (Alaimo, 2015). Activists posting about the Mahsa Amini protests and Ukrainians posting about the Russian invasion of Ukraine have reported similar experiences with recent Meta policy changes that have led to mass takedowns of protest footage and related content (Alimardani, 2022). The impacts of social media platforms' policy and practices for moderation are growing as cyberactivism has become more integral in social organizing (Cammaerts, 2015). 
However, due to accuracy and bias issues of content moderation algorithms deployed on these platforms (Binns et al., 2017; Buolamwini & Gebru, 2018; Rauchberg, 2022), engaging social media as a tool for social and political organizing is becoming more challenging. The intricacies and the downstream systemic effects of these content moderation techniques are not explicitly accounted for by social media platforms. Therefore, content moderation is pertinent to social media users based on the effects that moderation guidelines have not only on online behavior but also on offline behavior. The objectives of this paper are twofold. First and foremost, the goal of this paper is to contribute to the academic discourse raising awareness about how individuals are being silenced by content moderation algorithms on social media. This paper does this primarily by exploring the social and political implications of social media content moderation by framing them through the lens of activism and activist efforts online and offline. The secondary goal of this paper is to make a case for social media companies to develop features for individuals who are wrongfully marginalized on their platforms to be notified about and to appeal incidenc
Additionally, TikTok has acknowledged bias in its anti-bullying moderation system, which suppressed content from individuals who appeared to have disabilities, facial disfigurements, or other characteristics that might make them vulnerable to bullying (Botella, 2019; Köver & Reuter, 2019; Rauchberg, 2022). The app's short history is thus replete with evidence of algorithmic suppression of certain groups. Other leaked information about the company has revealed strategies used by TikTok's parent company, ByteDance, to moderate, suppress, or censor content that runs counter to Chinese foreign-policy goals (Hern, 2019). This evidence indicates the moderation of content concerning Tiananmen Square and Tibetan independence (Hern, 2019) and corroborates allegations that the company censored content about the Hong Kong protests (Harwell & Romm, 2019). Another research article, on the shadowbanning of marginalized individuals, highlights how more vulnerable communities are disproportionately targeted by these moderation techniques on Instagram, both online and offline (Middlebrook, 2020). That article considers the effects of shadowbans on marginalized communities given that users typically lack awareness of moderation algorithms. Unfortunately, the absence of guidelines letting users know when or why they have been shadowbanned, and of any appeal process to reverse a shadowban, makes this punishment especially unjust.

Censorship and surveillance adversely affect users and activists from marginalized communities who rely on social media to organize (Alimardani, 2022; Asher-Schapiro, 2022; Sandoval-Almazan & Ramon Gil-Garcia, 2014). For activists and individuals in marginalized communities, such online oppression can be a decisive factor in the success of their organizing efforts and information sharing. Social media companies' lack of specificity about these practices, and the inaccuracy of content moderation algorithms when moderating content from marginalized individuals, carry significant implications for users' freedom of expression. The opacity of moderation algorithms, along with recent changes at social media sites (e.g., Elon Musk's acquisition of Twitter; Asher-Schapiro, 2022), has prompted further discussion of how social media sites moderate conversation (see Gerrard, 2018) and who is actually being moderated (Middlebrook, 2020). The systemic inadequacies of these moderation algorithms are compounded by the common practice on most sites of relying on users to report content as offensive or inappropriate rather than detecting it by other means (Chen et al., 2012). Relying on and delegating content reporting to users becomes a still larger problem for the suppression of marginalized, guideline-compliant content: users can exploit their reporting privileges regardless of whether content adheres to a site's community guidelines. Existing research on partisan-motivated behaviors, such as distributing fake news articles (Osmundsen et al., 2021) and labeling news as fake when it does not fit partisans' existing beliefs (Kahne & Bowyer, 2017), supports the possibility that social media users may use reporting features to have content removed that contradicts their attitudes. Activists may thus face a double bind in which existing automated content moderation systems may unfairly and inaccurately remove their content of their own accord, while individuals who disagree with them may flag their content to those same unfair automated systems for removal. This double bind leaves activists who use social media at a disadvantage in sharing their message, even as social media become increasingly necessary for engaging in activism (Cammaerts, 2015).

This section discusses two recent examples of significant social movements that used social media in unique and important ways, and the effects that content takedowns had, or could have had, on those movements. The BLM movement originated on social media with the hashtag #BlackLivesMatter following the acquittal of George Zimmerman, who shot and killed Trayvon Martin in 2012. The hashtag and movement gained enormous momentum in the United States in 2014 after Michael Brown in Missouri and Eric Garner in New York were killed in incidents of police violence (Howard University School of Law, 2023). After the murder of George Floyd by Minnesota police in 2020, the hashtag and the movement spread globally (Howard University School of Law, 2023; Tilly et al., 2019). In a study of US college-athlete activism, college-athlete activists identified social media as one of the primary arenas in which activism happens, allowing activists to access, amplify, and engage with messages from other members of the movement (Feder et al., 2023).

Activists were forced to switch rapidly between social media sites to counter the Iranian government's efforts to suppress the movement's messages (Amidi, 2022; Iran International, 2022; Kumar, 2022). Because the Iranian government responded to the movement and its participants with violent repression, censorship of social media posts, and Internet disruptions (CNN Special Report, 2023; AFP, 2022), Iranian protesters had to rely on individuals abroad, along with innovations such as encryption and offsite servers, to share protest information on social media and stay up to date (Amidi, 2022; Butcher, 2022; Kumar, 2022). Amid government censorship and violence against protesters (Amidi, 2022; CNN Special Report, 2023), the movement and its members demonstrated resilience by moving quickly between social media platforms and amplifying their message through both online and offline presence (Kumar, 2022). Social media have consequently played a critical role in the movement's sustained impact (Amidi, 2022; Kumar, 2022). Instagram, in particular, had become one of the most popular social media platforms in Iran as the only uncensored form of foreign social media (Alimardani, 2022; Dagres, 2021). In response to Russia's invasion of Ukraine, however, Meta made a policy adjustment in 2022 that led to the mass removal of content related to the Iranian protests containing a common Iranian protest slogan (Alimardani, 2022). The slogan, which translates into English as a call for death to the army, the current Supreme Leader, and the sitting president, functions culturally as a symbolic cry of dissent against Iran's authoritarian regime rather than a genuine call to violence (Alimardani, 2022). Meta had previously treated these slogans as exceptions to its community guidelines, but a new policy change applied to both these slogans in Iran and other protest slogans used in Ukraine during the Russian invasion (Alimardani, 2022; Biddle, 2022; Vengattil & Culliford, 2022). Meta has accordingly been criticized for prioritizing deference to dictators over protesters (Höppner, 2022), and for human-rights and free-speech policies that hew closely to those of the United States (Biddle, 2022).

For Iranian protesters, the capabilities of their government, combined with the policy decisions that social media companies such as Meta make in moderating their content, pose a uniquely compounded challenge to organizing and mobilization. That Meta's policy change triggered the mass, systematic removal of posts from individual accounts and prominent national media outlets without warning (Alimardani, 2022) intensified the degree to which these protesters were forced to monitor their own posts and strategize their technology use. Moreover, evidence of the Iranian government hacking activists' accounts or controlling the accounts of protesters it has detained (Polglase & Mezzofiore, 2022) poses a major threat to activists and their movement. Social media platforms' moderation policies and practices clearly have important consequences for activists and their movements: this change to Meta's content moderation policy produced outcomes similar to the censorship practiced by the Iranian government. The case thus illustrates that if social media platforms' goals genuinely involve fostering free speech on their sites, they need to recognize and build culturally nuanced understandings into their moderation policies and practices. Beyond concerns that algorithmic moderation systems perpetuate the systemic political injustices that persist offline, these practices raise numerous ethical questions and questions of scale (Gorwa et al., 2020; Marcondes et al., 2021). The following section therefore examines the political
引用次数: 0
Crowdfunding platforms as conduits for ideological struggle and extremism: On the need for greater regulation and digital constitutionalism
CAS Tier 1 (Literature) Q1 COMMUNICATION Pub Date: 2023-09-24 DOI: 10.1002/poi3.369
Matthew Wade, Stephanie A. Baker, Michael J. Walsh
Abstract Crowdfunding platforms remain understudied as conduits for ideological struggle. While other social media platforms may enable the expression of hateful and harmful ideas, crowdfunding can actively facilitate their enaction through financial support. In addressing such risks, crowdfunding platforms attempt to mitigate complicity but retain legitimacy. That is, they seek to ensure their fundraising tools are not exploited for intolerant, violent or hate‐based purposes, while simultaneously avoiding restrictive policies that undermine their legitimacy as 'open' platforms. Although social media platforms are routinely scrutinized for enabling misinformation, hateful rhetoric and extremism, crowdfunding has largely escaped critical inquiry, despite being repeatedly implicated in amplifying such threats. Drawing on the 'Freedom Convoy' movement as a case study, this article employs critical discourse analysis to trace how crowdfunding platforms reveal their underlying values in privileging either collective safety or personal liberty when hosting divisive causes. The radically different policy decisions adopted by the crowdfunding platforms GoFundMe and GiveSendGo expose a concerning divide between 'Big Tech' and 'Alt‐Tech' platforms regarding what harms they are willing to risk, and the ideological rationales through which these determinations are made. There remain relatively few regulatory safeguards guiding such impactful strategic choices, leaving crowdfunding platforms susceptible to weaponization. With Alt‐Tech platforms aspiring to build an 'alternative internet', this paper highlights the urgent need to explore digital constitutionalism in the crowdfunding space, establishing firmer boundaries to better mitigate fundraising platforms becoming complicit in catastrophic harms.
Citations: 0
Website blocking in the European Union: Network interference from the perspective of Open Internet
CAS Tier 1 (Literature) Q1 COMMUNICATION Pub Date: 2023-09-19 DOI: 10.1002/poi3.367
Vasilis Ververis, Lucas Lasota, Tatiana Ermakova, Benjamin Fabian
Abstract By establishing an infrastructure for monitoring and blocking networks in accordance with European Union (EU) law on preventive measures against the spread of information, EU member states have also made it easier to block websites and services and to monitor information. While relevant studies have documented Internet censorship in non‐European countries, as well as the use of such infrastructures for political reasons, this study examines network interference practices such as website blocking against the backdrop of an almost complete lack of EU‐related research. Specifically, it performs an analysis of all 27 EU countries based on three different sources: first, tens of millions of historical network measurements collected in 2020 by Open Observatory of Network Interference volunteers from around the world; second, the publicly available blocking lists used by EU member states; and third, the reports issued by network regulators in each country from May 2020 to April 2021. Our results show that authorities issue multiple types of blocklists, and that Internet Service Providers limit access to different types and categories of websites and services. Such resources are sometimes blocked for unknown reasons and are not included in any of the publicly available blocklists. The study concludes by discussing the hurdles related to network measurements and the lack of transparency from regulators in specifying which website addresses are subject to blocking.
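The kind of measurement analysis the abstract describes can be illustrated with a minimal sketch: given web-connectivity records, tally which domains received a blocking verdict in each country. The field names here ("probe_cc", "domain", "blocking") are assumptions loosely modeled on OONI's public web_connectivity schema, not the authors' actual pipeline, and the sample records are invented.

```python
# Sketch: grouping domains with a positive blocking verdict by country.
# Field names are assumptions modeled on OONI-style measurement records.
from collections import defaultdict

def blocked_domains_by_country(measurements):
    """Map country code -> set of domains with any blocking verdict."""
    blocked = defaultdict(set)
    for m in measurements:
        # "blocking" is falsy (False/None) when no interference was observed
        if m.get("blocking"):
            blocked[m["probe_cc"]].add(m["domain"])
    return dict(blocked)

sample = [
    {"probe_cc": "DE", "domain": "example.org", "blocking": "dns"},
    {"probe_cc": "DE", "domain": "example.org", "blocking": False},
    {"probe_cc": "FR", "domain": "example.net", "blocking": "http-diff"},
    {"probe_cc": "FR", "domain": "example.com", "blocking": None},
]

print(blocked_domains_by_country(sample))
# {'DE': {'example.org'}, 'FR': {'example.net'}}
```

Cross-referencing such tallies against the member states' published blocklists is what surfaces the study's key finding: domains blocked in practice but absent from any public list.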
Citations: 0
Content moderation through removal of service: Content delivery networks and extremist websites
CAS Tier 1 (Literature) Q1 COMMUNICATION Pub Date: 2023-09-18 DOI: 10.1002/poi3.370
Seán Looney
Abstract Researchers have paid considerable attention to social media platforms, especially the 'big companies', and increasingly also to messaging applications, and to how effectively they moderate extremist and terrorist content on their services. Much less attention has been paid to whether and how infrastructure and service providers further down 'the tech stack' deal with extremism and terrorism. Content Delivery Networks (CDNs) such as Cloudflare play an underestimated role in moderating the presence of extremist and terrorist content online, as it is impossible for these websites to operate without DDoS protection. This is evidenced by the takedown of a wide range of websites, such as The Daily Stormer, 8chan, a variety of Taliban websites and, more recently, the organised harassment site Kiwifarms, following refusal of service by Cloudflare. However, it is unclear whether the company conducts any formal process of content review when it decides to refuse services. This article first analyses which extremist and terrorist websites make use of Cloudflare's services as well as other CDNs, and how many of them have been subject to takedown following refusal of service. It then analyses CDNs' terms of service and how current and upcoming internet regulation applies to these CDNs.
Citations: 1
Follow to be followed: The centrality of MFAs in Twitter networks
CAS Tier 1 (Literature) Q1 COMMUNICATION Pub Date: 2023-09-11 DOI: 10.1002/poi3.368
Ilan Manor, Elad Segev
Abstract This article outlines three major features of the digital society (information sharing, a levelled playing field, and reciprocal surveillance) and explores their manifestation in the field of diplomacy. The article analyzes the international Twitter network of 78 Ministries of Foreign Affairs (MFAs) during the critical period of the network's growth between 2014 and 2016. To explain why some MFAs follow or are followed by their peers, both internal (Twitter) and external (gross domestic product) factors were considered. The analysis found the principle of digital reciprocity to be the most important factor in explaining an MFA's centrality: ministries that follow their peers are more likely to be followed in return. Other factors that predict the popularity of MFAs among their peers are regionality, technological savviness, and national media environments. These findings provide a broader understanding of contemporary diplomacy and the fierce competition over attention in the digital society.
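The centrality and reciprocity measures at the heart of the study can be sketched on a toy follow graph: in-degree counts how many peers follow an account, and reciprocated pairs capture the "follow to be followed" principle. The handles and edges below are invented examples, not data from the study.

```python
# Toy directed follow network: account -> set of accounts it follows.
# Handles are invented for illustration only.
follows = {
    "MFA_A": {"MFA_B", "MFA_C"},
    "MFA_B": {"MFA_A"},
    "MFA_C": {"MFA_A"},
}

def in_degree(graph):
    """Number of accounts following each account (a popularity measure)."""
    counts = {node: 0 for node in graph}
    for followees in graph.values():
        for f in followees:
            counts[f] = counts.get(f, 0) + 1
    return counts

def reciprocated(graph):
    """Unordered pairs that follow each other -- the reciprocity signal."""
    return {frozenset((a, b)) for a, out in graph.items()
            for b in out if a in graph.get(b, set())}

print(in_degree(follows))        # MFA_A, which follows both peers, is most followed
print(len(reciprocated(follows)))  # 2 reciprocated pairs
```

In the toy data, the account that follows both peers ends up with the highest in-degree, mirroring the study's finding that following is the strongest predictor of being followed.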
Citations: 0
Transparency for what purpose?: Designing outcomes‐focused transparency tactics for digital platforms
CAS Tier 1 (Literature) Q1 COMMUNICATION Pub Date: 2023-09-01 DOI: 10.1002/poi3.362
Yinuo Geng
Abstract Transparency has long been held up as the solution to the societal harms caused by digital platforms' use of algorithms. However, what transparency means, how to create meaningful transparency, and what behaviors can be altered through transparency are all ambiguous legal and policy questions. This paper argues for beginning by clarifying the desired outcome (the "why") before focusing on transparency processes and tactics (the "how"). Moving beyond analyses of the ways algorithms impact human lives, this research articulates an approach that tests and implements the right set of transparency tactics aligned to specific predefined behavioral outcomes we want to see on digital platforms. To elaborate on this approach, three specific desirable behavioral outcomes are highlighted, to which potential transparency tactics are then mapped. No single set of transparency tactics can solve all the possible harms from digital platforms, making such an outcomes‐focused approach to selecting transparency tactics the best suited to the constantly evolving nature of algorithms, digital platforms, and our societies.
Citations: 0