Effects of online citizen participation on legitimacy beliefs in local government: Evidence from a comparative study of online participation platforms in three German municipalities
Tobias Escher, Bastian Rottinghaus. Policy and Internet, 9 November 2023. https://doi.org/10.1002/poi3.371

Abstract: In order to generate legitimacy for policies and political institutions, governments regularly involve citizens in the decision-making process, increasingly so via the Internet. This research investigates whether online participation does indeed have a positive impact on the legitimacy beliefs of those citizens engaging with the process, and which particular aspects of the participation process, the individual participants, and the local context contribute to these changes. Our surveys of participants in almost identical online consultations in three German municipalities show that the participation process and its expected results have a sizeable effect on satisfaction with local political authorities and local regime performance. While most participants report at least slightly more positive perceptions, which are mainly output-oriented, for some, engagement with the process leads not to more but to less legitimacy. We find this to be the case both for participants who remain silent and for those who participate intensively. Our results also confirm the important role of existing individual resources and of context-related attitudes such as trust in and satisfaction with local (not national) politics. Finally, our analysis shows that online participation can enable constructive discussion, deliver useful results, and attract people who would not have participated offline.
“Highly nuanced policy is very difficult to apply at scale”: Examining researcher account and content takedowns online
Aaron Y. Zelin. Policy and Internet, 6 November 2023. https://doi.org/10.1002/poi3.374

Abstract: Since 2019, researchers examining, archiving, and collecting extremist and terrorist materials online have increasingly been taken offline, in part as a consequence of the automation of content moderation by technology companies and of national governments calling for ever quicker takedowns. Based on an online survey of peers in the field, this research highlights that up to 60% of researchers surveyed have had either their accounts or content they have posted or stored online taken down from various platforms. Beyond the quantitative data, this research also gathered qualitative answers about the concerns individuals in the field have about this problem set: the lack of transparency on the part of the technology companies, the hindering of research into and understanding of complicated and evolving extremist and terrorist phenomena, the undermining of potential collaboration within the research field, and the potential for self-censorship online. An easy solution would be a whitelist, though this has inherent downsides as well, particularly given differences between researchers at different career stages, with and without institutional affiliation, and between researchers from the West and the Global South. Either way, securitizing research, in whatever form this takes in the future, will fundamentally hurt research.
Special issue: The (international) politics of content takedowns: Theory, practice, ethics
James Fitzgerald, Ayse D. Lokmanoglu. Policy and Internet, 6 November 2023. https://doi.org/10.1002/poi3.375

Content takedowns have emerged as a key regulatory pillar in the global fight against misinformation and extremism. Despite their increasing ubiquity as disruptive tools in political processes, however, their true efficacy remains up for debate. We “know,” for example, that takedowns had a strong disruptive effect on Islamic State-supporting networks from 2014 onwards (Conway et al., 2019), but we do not know whether constraining avenues for expression may ultimately accelerate acts of violence. We also know that extreme-right networks have weaponised content takedowns as evidence of victimization and the steadfast erosion of “free speech,” often underpinning calls to (violent) action and leveraging the popularity of alt-media—such as Gab, Rumble, Truth Social and Parler—as refuges for the persecuted and de-platformed alike. There is need for caution, too, as takedowns are applied by authoritarian governments to stifle dissent—measures increasingly absorbed into law (see Jones, 2022)—while in various theaters of conflict, content takedowns have erased atrocity and resistance, ultimately disrupting the archiving of war (see Banchik, 2021).

This special issue collates interdisciplinary perspectives on how the policies and practices of content takedowns interact, with consequences for international politics. Across 11 papers, we explore how content takedowns variously interface with democracy, history, free speech, national and regional regulations, activism, partisanship, violent extremism, effects on marginalized populations, strategies and techniques (i.e., self-reporting, AI, and variations amongst platforms), and the flexibility and adaptability (i.e., migration, hidden messages) of harmful content and actors. The papers in this issue are geographically diverse, with perspectives from Latin America, the Middle East and North Africa, Europe, North America, and Oceania.

The editors consider content takedowns as a function of content moderation, aligning with the consensus view (see Gillespie et al., 2020); nevertheless, a review of the literature finds that content takedowns are rarely treated as the primary object of inquiry. While the subsumption of content takedowns as a subtopic of content moderation is understandable, this special issue attempts to foreground content takedowns as the primary focus for analysis: a subtle epistemological shift that provides a collective contribution to academic and policy-facing debates. To that end, it is necessary to define our basic terms of reference. Turning first to content moderation, one of the earliest—and most cited—interpretations is that of Kaplan and Haenlein (2010), who view it as ‘the self-regulation of social media companies for the safety of its users'. Though useful, this interpretation fails to account for an intractable proviso: tech companies act as intermediaries to the content they are hosting and removing, but do not want to be liable for the content (Caplan & Napoli, 2018; Gillespie, 201 […]
Countering online terrorist content: A social regulation approach
Amy-Louise Watkin. Policy and Internet, 30 October 2023. https://doi.org/10.1002/poi3.373

Abstract: After a period of self-regulation, countries around the world began to implement regulations for the removal of terrorist content from tech platforms. However, much of this regulation has been criticised for a variety of reasons, most prominently over concerns about infringing free speech and creating unfair burdens for smaller platforms. In addition, the regulation is heavily centred on content moderation yet fails to consider or address the psychosocial risks that moderation poses to human content moderators. This paper argues that where regulation has been heavily criticised yet continues to inspire similar regulation, a new regulatory approach is required. The aim of this paper is to undertake an introductory examination of the use of a social regulation approach in three other industries (environmental protection, consumer protection and occupational health and safety) in order to investigate new regulatory avenues that could inform regulation which both counters terrorist content on tech platforms and is concerned with the safety of content moderators.
Content takedowns and activist organizing: Impact of social media content moderation on activists and organizing
Diane Jackson. Policy and Internet, 26 October 2023. https://doi.org/10.1002/poi3.372

Social media companies are increasingly transcending the offline sphere by shaping online discourse that has direct effects on offline outcomes. Recent polls have shown that as many as 70% of young people in the United States have used social media for information about political elections (Booth et al., 2020) and that almost 30% of US adults have used social media to post about political and social issues (McClain, 2021). Further, social media have become a site of organizing, with over half of US adults reporting having used social media as a tool for gathering or sharing information about political or social issues (Anderson et al., 2018). Despite the necessity of removing content that breaches the guidelines set forth by social media companies such as Facebook, Instagram, Twitter, and TikTok, a gap persists between the content that violates guidelines and the content that is actually removed from social media sites.

For activists particularly, content suppression is not only a matter of censorship at the individual level. During a time of significant mobilization, activists rely on their social media platforms perhaps more than ever before. This has been demonstrated by the Arabic-language Facebook page “We Are All Khaled Said,” which has been credited with promoting the 2011 Egyptian Revolution (Alaimo, 2015). Activists posting about the Mahsa Amini protests and Ukrainians posting about the Russian invasion of Ukraine have reported similar experiences with recent Meta policy changes that have led to mass takedowns of protest footage and related content (Alimardani, 2022). The impact of social media platforms' moderation policies and practices is growing as cyberactivism becomes more integral to social organizing (Cammaerts, 2015). However, due to accuracy and bias issues in the content moderation algorithms deployed on these platforms (Binns et al., 2017; Buolamwini & Gebru, 2018; Rauchberg, 2022), engaging social media as a tool for social and political organizing is becoming more challenging. The intricacies and the downstream systemic effects of these content moderation techniques are not explicitly accounted for by social media platforms. Content moderation therefore matters to social media users because moderation guidelines shape not only online behavior but also offline behavior.

The objectives of this paper are twofold. First and foremost, the paper contributes to the academic discourse raising awareness about how individuals are being silenced by content moderation algorithms on social media, primarily by exploring the social and political implications of content moderation through the lens of activism and activist efforts online and offline. The secondary goal is to make a case for social media companies to develop features that allow individuals who are wrongfully marginalized on their platforms to be notified about, and to appeal, such incidences.
Crowdfunding platforms as conduits for ideological struggle and extremism: On the need for greater regulation and digital constitutionalism
Matthew Wade, Stephanie A. Baker, Michael J. Walsh. Policy and Internet, 24 September 2023. https://doi.org/10.1002/poi3.369

Abstract: Crowdfunding platforms remain understudied as conduits for ideological struggle. While other social media platforms may enable the expression of hateful and harmful ideas, crowdfunding can actively facilitate their enaction through financial support. In addressing such risks, crowdfunding platforms attempt to mitigate complicity but retain legitimacy. That is, they seek to ensure their fundraising tools are not exploited for intolerant, violent or hate-based purposes, while simultaneously avoiding restrictive policies that undermine their legitimacy as ‘open’ platforms. Although social media platforms are routinely scrutinized for enabling misinformation, hateful rhetoric and extremism, crowdfunding has largely escaped critical inquiry, despite being repeatedly implicated in amplifying such threats. Drawing on the ‘Freedom Convoy’ movement as a case study, this article employs critical discourse analysis to trace how crowdfunding platforms reveal their underlying values in privileging either collective safety or personal liberty when hosting divisive causes. The radically different policy decisions adopted by the crowdfunding platforms GoFundMe and GiveSendGo expose a concerning divide between ‘Big Tech’ and ‘Alt-Tech’ platforms regarding what harms they are willing to risk, and the ideological rationales through which these determinations are made. There remain relatively few regulatory safeguards guiding such impactful strategic choices, leaving crowdfunding platforms susceptible to weaponization. With Alt-Tech platforms aspiring to build an ‘alternative internet’, this paper highlights the urgent need to explore digital constitutionalism in the crowdfunding space, establishing firmer boundaries to better prevent fundraising platforms from becoming complicit in catastrophic harms.
Website blocking in the European Union: Network interference from the perspective of Open Internet
Vasilis Ververis, Lucas Lasota, Tatiana Ermakova, Benjamin Fabian. Policy and Internet, 19 September 2023. https://doi.org/10.1002/poi3.367

Abstract: By establishing an infrastructure for monitoring and blocking networks in accordance with European Union (EU) law on preventive measures against the spread of information, EU member states have also made it easier to block websites and services and to monitor information. While relevant studies have documented Internet censorship in non-European countries, as well as the use of such infrastructures for political reasons, this study examines network interference practices such as website blocking within the EU, where research has so far been almost entirely lacking. Specifically, it analyses all 27 EU countries on the basis of three different sources: first, tens of millions of historical network measurements collected in 2020 by Open Observatory of Network Interference (OONI) volunteers from around the world; second, the publicly available blocking lists used by EU member states; and third, the reports issued by network regulators in each country from May 2020 to April 2021. Our results show that authorities issue multiple types of blocklists and that Internet Service Providers limit access to different types and categories of websites and services. Some resources are blocked for unknown reasons and are not included in any of the publicly available blocklists. The study concludes by discussing the hurdles involved in network measurements and regulators' lack of transparency in specifying which website addresses are subject to blocking.
Content moderation through removal of service: Content delivery networks and extremist websites
Seán Looney. Policy and Internet, 18 September 2023. https://doi.org/10.1002/poi3.370

Abstract: Considerable attention has been paid by researchers to social media platforms, especially the ‘big companies’, and increasingly also to messaging applications, and to how effectively they moderate extremist and terrorist content on their services. Much less attention has been paid to whether and how infrastructure and service providers further down ‘the tech stack’ deal with extremism and terrorism. Content Delivery Networks (CDNs) such as Cloudflare play an underestimated role in moderating the presence of extremist and terrorist content online, as it is impossible for these websites to operate without DDoS protection. This is evidenced by the takedown of a wide range of websites, including The Daily Stormer, 8chan, a variety of Taliban websites and, more recently, the organised harassment site Kiwifarms, following refusal of service by Cloudflare. However, it is unclear whether the company conducts any formal process of content review when it decides to refuse service. This article first provides an analysis of which extremist and terrorist websites make use of Cloudflare's services as well as those of other CDNs, and of how many of them have been taken down following refusal of service. The article then analyses CDNs' terms of service and considers how current and upcoming internet regulation applies to these CDNs.
Follow to be followed: The centrality of MFAs in Twitter networks
Ilan Manor, Elad Segev. Policy and Internet, 11 September 2023. https://doi.org/10.1002/poi3.368

Abstract: This article outlines three major features of the digital society (information sharing, a levelled playing field, and reciprocal surveillance) and explores their manifestation in the field of diplomacy. The article analyzed the international network of 78 Ministries of Foreign Affairs (MFAs) on Twitter during the critical period of its growth between 2014 and 2016. To explain why some MFAs follow or are followed by their peers, both internal (Twitter) and external (gross domestic product) factors were considered. The analysis found the principle of digital reciprocity to be the most important factor in explaining an MFA's centrality: ministries that follow their peers are more likely to be followed in return. Other factors that predict the popularity of MFAs among their peers are regionality, technological savviness, and national media environments. These findings provide a broader understanding of contemporary diplomacy and of the fierce competition over attention in the digital society.
Transparency for what purpose? Designing outcomes-focused transparency tactics for digital platforms
Yinuo Geng. Policy and Internet, 1 September 2023. https://doi.org/10.1002/poi3.362

Abstract: Transparency has long been held up as the solution to the societal harms caused by digital platforms' use of algorithms. However, what transparency means, how to create meaningful transparency, and what behaviors can be altered through transparency are all ambiguous legal and policy questions. This paper argues for beginning by clarifying the desired outcome (the “why”) before focusing on transparency processes and tactics (the “how”). Moving beyond analyses of the ways algorithms impact human lives, this research articulates an approach that tests and implements the right set of transparency tactics aligned to specific, predefined behavioral outcomes we want to see on digital platforms. To elaborate on this approach, three specific desirable behavioral outcomes are highlighted, to which potential transparency tactics are then mapped. No single set of transparency tactics can address all the possible harms from digital platforms, making such an outcomes-focused approach to selecting transparency tactics best suited to the constantly evolving nature of algorithms, digital platforms, and our societies.