{"title":"Special issue: The (international) politics of content takedowns: Theory, practice, ethics","authors":"James Fitzgerald, Ayse D. Lokmanoglu","doi":"10.1002/poi3.375","DOIUrl":null,"url":null,"abstract":"Content takedowns have emerged as a key regulatory pillar in the global fight against misinformation and extremism. Despite their increasing ubiquity as disruptive tools in political processes, however, their true efficacy remains up for debate. We “know,” for example, that takedowns had a strong disruptive effect on Islamic State-supporting networks from 2014 onwards (Conway et al., 2019), but we do not know whether constraining avenues for expression may ultimately accelerate acts of violence. We also know that extreme-right networks have weaponised content takedowns as evidence of victimization and the steadfast erosion of “free speech,” often underpinning calls to (violent) action and leveraging the popularity of alt-media—such as Gab, Rumble, Truth Social and Parler—as refuges for the persecuted and de-platformed alike. There is need for caution, too, as takedowns are applied by authoritarian governments to stifle dissent—measures increasingly absorbed into law (see Jones, 2022)—while in various theaters of conflict, content takedowns have erased atrocity and resistance, ultimately disrupting the archiving of war (see Banchik, 2021). This special issue collates inter-disciplinary perspectives on how the policies and practices of content takedowns interact, with consequences for international politics. Across 11 papers, we explore how content takedowns variously interface with: democracy, history, free speech, national and regional regulations, activism, partisanism, violent extremism, effects on marginilized populations, strategies and techniques (i.e., self-reporting, AI, and variations amongst platforms), and flexibility and adaptability (i.e., migration, hidden messages) of harmful content and actors. The papers in this issue are geographically diverse, with perspectives from Latin America, the Middle East and North Africa, Europe, North America, and Oceania. The editors consider content takedowns as a function of content moderation, aligning with the consensus view (see Gillespie et al., 2020); nevertheless, a review of the literature finds that content takedowns are rarely treated as the primary object of inquiry. While the subsumption of content takedowns as a subtopic of content moderation is understandable, this Special Issue attempts to foreground content takedowns as the primary focus for analysis: a subtle epistemological shift that provides a collective contribution to academic and policy-facing debates. To that end, it is necessary to define our basic terms of reference. Turning first to content moderation, one of the earliest—and most cited1—interpretations is that of Kaplan and Haenlein (2010), who view it as ‘the self-regulation of social media companies for the safety of its users'. 
Though useful, this interpretation fails to account for an intractable proviso: tech companies act as intermediaries to the content they are hosting and removing, but do not want to be liable for the content (Caplan & Napoli, 2018; Gillespie, 2010).2 Consequently, there is no single standard of content moderation that is applied by all tech companies, just as, clearly, there is no international governance of the World Wide Web (Wu, 2015).3 Concept moderation is, therefore, a concept born(e) of multiplicity, accounting for a range of actors that necessarily includes, but is not limited to, tech companies. We are more convinced by the holistic perspective of Gillespie at al. (2020), who define content moderation as: [T]he detection of, assessment of, and interventions taken on content or behavior deemed unacceptable by platforms or other information intermediaries, including the rules they impose, the human labor and technologies required, and the institutional mechanisms of adjudication, enforcement, and appeal that support it (Gillespie at al., 2020, p. 2) Divining a neat definition of content takedowns is a more difficult task for several reasons. First, there does not exist, to our knowledge, an authoritative definition of content takedowns comparable with, say, Kaplan and Haenlein (2010) and Gillespie et al. (2020). Second—and owing to the novelty of this Special Issue—most studies that engage with content takedowns tend to situate their analyses within the remit of content moderation, assuming recognition of “content takedowns” as a conceptual fait accompli (see, e.g., Lakomy, 2023). We note this trend not as a criticism, but an observation. Third, content takedowns have been studied across several academic fields, including legal studies, media studies, sociology and terrorism/extremism studies, entailing a panoply of contending assumptions and disciplinary tendencies—a useful definition of content takedowns pursuant to copyright law (see Bar-Ziv & Elkin-Koren, 2018), for example, does not quite speak to the intended breadth of this Special Issue. With these provisos to hand, we synthesize Gillespie et al. (2020) definition of content moderation with Singh and Bankston's (2018) extensive typology4 to define content takedowns as: The removal of “problematic content” by platforms, or other information intermediaries, pursuant to legal or policy requirements, which occur across categories that include, but are not limited to: Government and legal content demands; copyright requests; trademark requests; network shutdowns and service interruptions; Right to be Forgotten delisting requests and; Community guidelines-based removals. Having established basic parameters, we now turn to provide a commentary on some of the most substantial political dimensions of content moderation and content takedowns, before providing a brief, individual summary of each paper. In 2010, Tarterton Gillespie took aim at social media companies' collective assertion that they offered neutral sites for communication, on which debate—and politics—simply occurred. By gradually replacing self-descriptive terms like “company” and “service” with “platform,” these entities projected an image of neutrality—the “platform” being a “‘raised level surface' designed to facilitate some activity that will subsequently take place” (Gillespie, 2010, p. 350). 
Pure neutrality is, of course, an illusion and early expressions of political sympathies among these enterprises tended to reach for the assuring hand of free-market capitalism, ideally unbound by government regulation (see Fuchs, 2013). Google and YouTube, for example, positioned themselves as “champions of freedom of expression” (Gillespie, 2010, p. 356) and in response to a takedown request of Jihadi content on YouTube from US Senator Joe Lieberman, the platform qualified its partial fulfillment with a rejoinder that it “encourages free speech and defends everyone's right to express unpopular points of view…allow[ing] our users to view all acceptable content and make up their own minds’” (YouTube Team, 2008 in ibid, p. 356). From the vantage point of 2023, these assertions of neutrality appear misguided at best, or at the very least, naively insulated from the degree to which major social media platforms would come to occupy positions of regulatory power to rival, and in some cases usurp, the traditional roles of states (see Klonick, 2017). Social media companies' application of content moderation policies affords to them great power; yet by essentially “establishing norms of what information and behaviors are censored or promoted on platforms” (González-Bailón & Lelkes, 2023, p. 162) these curated positions do not merely spill down from a raised level surface onto society but are dialectically embedded in, and refract, society. To this extent, early scholarship distinguishing “offline” and “online” ontologies read as if penned in a different world; today, there is a much stronger consensus that the behaviors, policies and identities of social media platforms shape myriad realities and the horizons of possibility that lie therein—be it to protect or to weaken democratic guardrails (Campos Mello, 2020), to mould socialization patterns among teenagers (Bucknell Bossen & Kottasz, 2020) or to accommodate a notable rise in ADHD self-diagnoses (Yeung et al., 2022). The systematic moderation of content, then, is much more than a regulatory sop to the expectations (or legal demands) of states or governing bodies, it is the very means by which social media platforms procedurally generate their identities, mirrored by the (political) cultures that they permit to flourish within their realm. Simply put, content moderation shapes the world(s) in which we live. There is little question that social media companies recognize the power of content moderation as a fulcrum of their identities/brands, despite a longstanding lack of transparency on how and why moderation decisions are made in practice (see Gorwa & Ash, 2020; Looney, 2023). Indeed, one might say that this dynamic lay at the heart of Meta's 2023 launch of Threads: a major new social media platform that sought to build on a massive, pre-existing userbase. Ostensibly a Twitter clone and created in response to Elon Musk's takeover of that platform, Threads was pitched by Meta CEO Mark Zuckerberg as a “friendly place,” his initial posts making clear that a moderated culture of “kindness” would be Threads' “key to success” (Chan, 2023) and that by effectively outsourcing content moderation to its users (Nix, 2023), what one saw on Threads would, per Meta's Global Affairs President, Nick Clegg, “feel meaningful to you.” At Musk's Twitter—re-branded to “X” on 23 July 2023—a different kind of freedom was afforded to its users. 
Musk dissolved Twitter's Trust and Safety council—tasked with “addressing hate speech, child exploitation, suicide, self-harm and other problems on the platform” (O' Brien & Ortutay, 2022)—5 months into his tenure, reinstated previously banned accounts and re-constituted the platform's image as a bastion of open debate, enacting much looser content moderation standards apparently tweaked to fit Musk's (malleable) dedication to “free speech absolutism” (see Sullivan, 2023). Data suggest that this more “open” approach has, in less than 1 year, resulted in a significant increase in hate speech on the platform (Darcy, 2022), coupled with a boom in conspiracy-facing content (Center on Extremism, 2023) and disinformation at such scale that the European Commission, in September 2023, identified X as “the platform with the largest ratio of mis/disinformation posts” in the year-to-date (European Commission, 2023). The 2023 face-off between Threads and Twitter is an important cultural and political watermark and bears lessons for contemporary scholarship. At its most base, it symbolizes how two of the world's richest men have leaned into content moderation to recycle their personal identities (see Hulsemann, 2023) and to differentiate the ontologies and conversations that might be conjured on their platforms. Both are essentially re-asserting the 2008 refrains of YouTube and Google as idealistic defenders of free speech and purveyors of user power to take (back) control; but that world—and any semblance of plausible deniability—has gone. Too much scholarship has since proven a link between deleterious social media practices and democratic decline, extremism, abuse and misinformation, to take any contemporary claims to neutrality seriously. And so, far removed from Musk and Zuckerberg's apparently playful hints at a Mixed Martial Arts bout in the summer of 2023, Meta and X's loosening of content moderation standards—and the gutting of election integrity teams ahead of a record number of democratic elections in 2024 (Harbath & Khizanishvili, 2023)—speaks to the more serious matter of how content moderation is wedged in a contemporary clinch between democracy and autocracy, with clear consequences for international politics and the array of political actors who stand to be affected. Returning to Gillespie et al. (2020) definition of content moderation, decisions to include or exclude ultimately fall to the platforms, be they aided by AI, human labor or a combination of both (see Gorwa et al., 2020). Yet, for a fuller understanding, we must also consider the range of stakeholders who feed into, and are affected by, these policies. Exerting pressure from above, for example, the EU's Digital Services Act (DSA)—effective as of August 23, 2023—will surely temper the actions of large online platforms as the realities of regulation grind against the libertarian ideals upon which so many of these platforms have been built (Barbrook & Cameron, 1996; Marwick, 2017). Indeed, as Reem Ahmed argues in this Special Issue, Germany's pivotal Network Enforcement Act (NetzDG)—passed in 2017—not only overlaps with the DSA in respect of its legal parameters: its norm-building prowess—and influence on the DSA—marks a plot-point in a common struggle to “rein in Big Tech,” with its adjoining challenge to maintain a workable balance between liberty and security (see Bigo, 2016). 
This task is rendered difficult, but not impossible, by the comparative absence of state regulation in the United States (see Busch, 2023; Morar & dos Santos, 2020), though court cases brought by state legislators (in Florida and Texas) against tech companies for impeding freedom of speech (Zakrzewski, 2023) highlights that progress on top-down regulation of social media moves not as a monolith but is (also) tempered by bottom-up pressures wrought by civil society. These dueling pressures entail that regulatory momentum on content moderation unfolds slowly, but the passing of landmark legislation in alternative spheres of power, such as Brazil (Tomaz, 2023) and the UK (Satariano, 2023) offer additional markers for a quickening pace. As the regulatory policies of states crystalise into a new frontier of geopolitics, a liberal consensus on content moderation appears to be settling on the joint principles of “human autonomy, dignity and democracy” (Mansell, 2023, p. 145). These values form the basis of the European Commission's definitive goal for social media regulation: to set “an international benchmark for a regulatory approach to online intermediaries” (European Commission, 2022) that explicitly aligns with “the rights and responsibilities of users, intermediary platforms, and public authorities and is based on European values—including the respect of human rights, freedom, democracy, equality and the rule of law.” (European Commission, 2020). Contemporary ruptures in world politics—including the persistent agitation of authoritarian-populism (see Schäfer, 2022)—suggest that the path to this ideal will, at the very least, be fraught with resistance, bringing with it a hardened commitment on the part of extremists to resist sweeping changes that might harm their political projects (McNeil-Wilson & Flonk, 2023), not to mention their commercial interests (see Caplan & Gillespie, 2020). State, intra- or supra-state regulation may offer the most direct promise of meaningful change, but we must beware the “mythical claims about regulatory efficiency” (Mansell, 2023, p. 145) and temper expectations about what top-down regulation can achieve alone, however ambitious or laudable these moves may be. From below, content moderation practices (including content takedowns) are known to nourish, if not spark, political resistance, potentially invigorating global civil society—albeit often as an unintended consequence (see Alimardani & Elswah, 2021). The fight for visibility among content-creators in India, Indonesia, and Pakistan (Zeng & Kaye, 2022), content moderators (Roberts, 2019) and marginalized groups more generally (Jackson, 2023) shows that “offline” and “online” forces of marginalization fold into, and replicate, one another in a shared ontology. (Online) resistance against moderation and takedown measures therefore yields the potential for the re-constitution of (offline) identities and an attendant expansion of spaces to act, speak and constitute new political identities and collective actions (see West, 2017). If this dynamic exists, then it also applies to classifications of collective actors that do not fight for the same vision of social progress as identified above. 
As Fitzgerald and Gerrand (2023) and Mattheis and Kingdon (2023) point out, content takedowns and other moderation practices—far from erasing extremist identities on the part of far-right actors—can provide a boon to their collective ability to (self-)present as righteous resistors to the oppressive forces of censorship, while also “gaming” norms of content moderation and takedowns to ensure that the content they wish to push to sympathetic followers ultimately finds a way. The possibility for activist communities to “rage against the machine” (West, 2017) and work to emancipate themselves from the yoke of states/social media control is indeed a powerful, potentially transformative force that will, in addition to top-down measures, surely affect how the future of content moderation continues to shape international politics. The degree to which content takedowns, specifically, affect these processes warrants further inquiry and constitutes one of the central themes of this Special Issue. In closing, though we have couched much of our opening statement on the modern vagaries of moderation, we must give pause to the notion that the dilemmas and possibilities posed by content takedowns are inherently new. As Zhang (2023) argues the (political) regulation of how speech is permitted (or denied) speaks to a longstanding philosophical collision between institutional and governance cultures of democratic control—content takedowns simply speak to its latest frontier. Santini et al. (2023) show that although social media is to the fore in the spread of political misinformation, we cannot overlook how, in the case of Brazil, the nonremoval of problematic content ensures its magnification by the country's more powerful broadcast media. Finally, Watkin (2023) sees through content takedowns a most fundamental dynamic of power, being exploitative labor practices. Focusing on the mental harms caused to content moderators by sifting through, and taking down, terrorist media, she argues that a blueprint for their protection already exists: it simply needs requires to be reconfigured to a modern setting. In closing, there is much to ponder on the veracity of content moderation to either reflect or change the realities that define our fractured political landscape and the array of actors that operate in its spaces. This Special Issue intends to move the disciplinary conversation forward, sparking reflection and, we hope, further conversation on the theory, practice and ethics of content takedowns. The first article by Colten Meisner (2023) highlights the vulnerability of social media creators in the face of mass reporting—a targeted, automated strategy used to trigger content takedowns and account bans. This form of harassment utilizes platform infrastructures for community governance, leaving creators with few avenues of support and access to platform assistance after orchestrated attacks. By conducting interviews with affected creators, this article seeks to understand how content reporting tools can be weaponized, exposing creators to a world of challenges, including barriers to self-expression. The findings are crucial in understanding the weaponization of content take downs to “remove” voices from the public sphere. The impact of algorithmic content moderation practices on marginalized groups, particularly activists, is the emphasis of the second article by Diane Jackson. 
While previous research has explored the limitations of automated content moderation, this abstract places it in the context of global social movements. It illustrates how marginalized groups experience online oppression similar to their offline marginalization and discusses the ethical and political implications at various levels – individual, organizing, and societal. This article calls for a systemic consideration of the effects of algorithmic content moderation, including takedown measures, on both online and offline activism. Meiqing Zhang delves into the multilevel sources of contention in content removal policies on social media platforms. The author exposes the value conflicts inherent in content moderation, where competing democratic virtues collide. Philosophical debates wrestle with the institutionalization of speech censorship, while governance challenges arise in determining who should be responsible for content guidelines. Furthermore, operational issues surface with existing lexicon-based content deletion technologies that are prone to errors. This article invites us to ponder the clash of democratic values and the confusion surrounding governance in a digital public sphere. It calls for a new social consensus and legitimate processes to establish a mode of online speech governance aligned with democratic principles. The fourth article by Vivian Gerrand and James Fitzgerald juxtaposes conspiracy theories within the wellness and health industry. The article, grounded in political philosophy and inspired by Chantal Mouffe's work, delves into the impact of content takedowns on online community formation and the unintended consequences of takedown policies as a potential accelerant to this process. It focuses on the global wellness industry, using the case of Australian wellness chef Pete Evans to illustrate how content takedown policies can inadvertently foster extremist sentiments in counter-hegemonic discursive spaces. While technology-driven content moderation gains prominence, extremist groups and conspiracy theorists have become adept at manipulating media content and technological affordances to evade regulation. This article by Ashley Mathheis and Ashton Kingdon unveils three primary manipulation tactics—numerology, borderlands, and merchandising—used by extremists online. It transcends ideological boundaries to focus on the tactics themselves, offering case examples from various extremist ideologies. The analysis underscores the importance of understanding how extremists use manipulation strategies to “game” content moderation. It calls for demystification processes to be incorporated into content moderation settings, expanding our understanding of sociotechnical remedial measures. The sixth article by Sean Looney tackles the critical role of Content Delivery Networks (CDNs) in the internet ecosystem and their response to extremist and terrorist content hosted on their servers. Using the example of Cloudflare and Kiwifarms, it highlights the lack of a standardized approach to content moderation across CDNs. The CEO of Cloudflare's reluctance to intervene underscores the ethical dilemma faced by CDNs, while the subsequent actions of CDNs like Diamwall raise questions about the industry's obligations. The article emphasizes the need for clear rules and obligations in the realm of CDN services, particularly in light of the EU's Digital Services Acts 2022. 
In examining takedown policies of mainstream platforms, Marie Santini, Débora Salles, and Bruno Mattos explore YouTube's recommendation system using Brazil as their case study. Their experiment in understanding the recommendation system sheds light on the systematic powers of platforms. Contrary to the stated aim of reigning in extremist content, their findings demonstrated that YouTube systematically gave preference to Jovem Pan content, Brazil's largest conservative media outlet (akin to Fox News in the United States), and the non-removal of the “toxic” content. This article illustrates how the recommendation algorithm of a mainstream platform magnified the imbalance in the portrayal of political candidates, thereby exposing a stark regulatory asymmetry between traditional broadcast media and online platforms in Brazil. By using a non-Anglophone and Global South case study, the article powerfully demonstrates the intricate dynamics of online content recommendation, and its potential impact on shaping public opinion and discourse, in spheres of regulation that are less well known to international audiences. Moving into the “traditional” sense of content takedowns, the seventh article by Amy Louise Watkin takes a broader perspective by considering their regulatory aspects. It highlights the criticism surrounding existing regulations for the removal of terrorist content from tech platforms, particularly concerning issues of free speech and employee well-being. Drawing inspiration from social regulation approaches in other industries like environmental protection, consumer protection, and occupational health and safety, this article advocates for a new regulatory approach that addresses both content moderation and the safety of content moderators themselves. Delving into states as regulators, Richard McNeill-Wilson and Danielle Flonk focus on the conundrum in which the European Union finds itself, regarding its quest to combat far-right extremism online. Pressure to address this issue has led to the development of a European-wide response. However, this response has been characterized by a delicate balance between policy agreements among member states, resulting in a potentially concerning feature of policymaking. The article highlights the challenges posed by the broadening and loosening of definitions surrounding far-right extremist content. By combining primary sources including policy documents with interviews with EU politicians and practitioners, this article examines the framing and securitization of extremist content regulation over time. It reveals how the securitizing lens of counter-extremism may unintentionally complicate the development of coherent and effective responses to the far right. In understanding the power of content takedown on researchers, Aaron Zelin maps out the challenges and broader consequences of automated content takedown to researchers Zelin (2023). By surveying and interviewing researchers who work on extremist content and integrating their experiences and concerns, this manuscript challenges the binary approach to content takedown. More importantly, using this data provides possible policy solutions to governments and platforms to maximize the efficiency of content takedown on extremist content while minimizing the consequences for researchers. The final article by Reem Ahmed (2023) takes us to Germany, where the Netzwerkdurchsetzungsgesetz (NetzDG) has reshaped the landscape of state-regulated content takedowns. 
This pioneering act enforces offline legality online, presenting a blueprint for content moderation worldwide. However, concerns have arisen regarding the balance between freedom of expression and law enforcement in content moderation. Through examining NetzDG-related case law and disputed takedowns, this study identifies the main points of contention and underscores the role of judicial decisions in the broader regulatory discourse. It delves into the challenges of identifying illegal content online and the implications for content moderation practices, including content takedown.","PeriodicalId":46894,"journal":{"name":"Policy and Internet","volume":"26 1","pages":"0"},"PeriodicalIF":4.1000,"publicationDate":"2023-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Policy and Internet","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1002/poi3.375","RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMMUNICATION","Score":null,"Total":0}
Citations: 0
Abstract
Content takedowns have emerged as a key regulatory pillar in the global fight against misinformation and extremism. Despite their increasing ubiquity as disruptive tools in political processes, however, their true efficacy remains up for debate. We “know,” for example, that takedowns had a strong disruptive effect on Islamic State-supporting networks from 2014 onwards (Conway et al., 2019), but we do not know whether constraining avenues for expression may ultimately accelerate acts of violence. We also know that extreme-right networks have weaponised content takedowns as evidence of victimization and the steadfast erosion of “free speech,” often underpinning calls to (violent) action and leveraging the popularity of alt-media—such as Gab, Rumble, Truth Social and Parler—as refuges for the persecuted and de-platformed alike. There is a need for caution, too, as takedowns are applied by authoritarian governments to stifle dissent—measures increasingly absorbed into law (see Jones, 2022)—while in various theaters of conflict, content takedowns have erased atrocity and resistance, ultimately disrupting the archiving of war (see Banchik, 2021).

This special issue collates inter-disciplinary perspectives on how the policies and practices of content takedowns interact, with consequences for international politics. Across 11 papers, we explore how content takedowns variously interface with: democracy, history, free speech, national and regional regulations, activism, partisanism, violent extremism, effects on marginalized populations, strategies and techniques (e.g., self-reporting, AI, and variations amongst platforms), and the flexibility and adaptability (e.g., migration, hidden messages) of harmful content and actors. The papers in this issue are geographically diverse, with perspectives from Latin America, the Middle East and North Africa, Europe, North America, and Oceania. The editors consider content takedowns as a function of content moderation, aligning with the consensus view (see Gillespie et al., 2020); nevertheless, a review of the literature finds that content takedowns are rarely treated as the primary object of inquiry. While the subsumption of content takedowns as a subtopic of content moderation is understandable, this Special Issue attempts to foreground content takedowns as the primary focus for analysis: a subtle epistemological shift that provides a collective contribution to academic and policy-facing debates.

To that end, it is necessary to define our basic terms of reference. Turning first to content moderation, one of the earliest—and most cited—interpretations is that of Kaplan and Haenlein (2010), who view it as “the self-regulation of social media companies for the safety of its users.” Though useful, this interpretation fails to account for an intractable proviso: tech companies act as intermediaries to the content they are hosting and removing, but do not want to be liable for that content (Caplan & Napoli, 2018; Gillespie, 2010). Consequently, there is no single standard of content moderation that is applied by all tech companies, just as, clearly, there is no international governance of the World Wide Web (Wu, 2015). Content moderation is, therefore, a concept born(e) of multiplicity, accounting for a range of actors that necessarily includes, but is not limited to, tech companies. We are more convinced by the holistic perspective of Gillespie et al. (2020), who define content moderation as: “[T]he detection of, assessment of, and interventions taken on content or behavior deemed unacceptable by platforms or other information intermediaries, including the rules they impose, the human labor and technologies required, and the institutional mechanisms of adjudication, enforcement, and appeal that support it” (Gillespie et al., 2020, p. 2).

Divining a neat definition of content takedowns is a more difficult task for several reasons. First, there does not exist, to our knowledge, an authoritative definition of content takedowns comparable with, say, Kaplan and Haenlein (2010) and Gillespie et al. (2020). Second—and owing to the novelty of this Special Issue—most studies that engage with content takedowns tend to situate their analyses within the remit of content moderation, assuming recognition of “content takedowns” as a conceptual fait accompli (see, e.g., Lakomy, 2023). We note this trend not as a criticism, but as an observation. Third, content takedowns have been studied across several academic fields, including legal studies, media studies, sociology and terrorism/extremism studies, entailing a panoply of contending assumptions and disciplinary tendencies—a useful definition of content takedowns pursuant to copyright law (see Bar-Ziv & Elkin-Koren, 2018), for example, does not quite speak to the intended breadth of this Special Issue. With these provisos to hand, we synthesize Gillespie et al.'s (2020) definition of content moderation with Singh and Bankston's (2018) extensive typology to define content takedowns as: the removal of “problematic content” by platforms, or other information intermediaries, pursuant to legal or policy requirements, occurring across categories that include, but are not limited to: government and legal content demands; copyright requests; trademark requests; network shutdowns and service interruptions; Right to be Forgotten delisting requests; and community guidelines-based removals.

Having established basic parameters, we now turn to provide a commentary on some of the most substantial political dimensions of content moderation and content takedowns, before providing a brief, individual summary of each paper. In 2010, Tarleton Gillespie took aim at social media companies' collective assertion that they offered neutral sites for communication, on which debate—and politics—simply occurred. By gradually replacing self-descriptive terms like “company” and “service” with “platform,” these entities projected an image of neutrality—the “platform” being a “‘raised level surface' designed to facilitate some activity that will subsequently take place” (Gillespie, 2010, p. 350). Pure neutrality is, of course, an illusion, and early expressions of political sympathies among these enterprises tended to reach for the assuring hand of free-market capitalism, ideally unbound by government regulation (see Fuchs, 2013). Google and YouTube, for example, positioned themselves as “champions of freedom of expression” (Gillespie, 2010, p. 356), and in response to a takedown request of Jihadi content on YouTube from US Senator Joe Lieberman, the platform qualified its partial fulfillment with a rejoinder that it “encourages free speech and defends everyone's right to express unpopular points of view…allow[ing] our users to view all acceptable content and make up their own minds” (YouTube Team, 2008, in ibid, p. 356). 
From the vantage point of 2023, these assertions of neutrality appear misguided at best, or at the very least, naively insulated from the degree to which major social media platforms would come to occupy positions of regulatory power to rival, and in some cases usurp, the traditional roles of states (see Klonick, 2017). Social media companies' application of content moderation policies affords them great power; yet by essentially “establishing norms of what information and behaviors are censored or promoted on platforms” (González-Bailón & Lelkes, 2023, p. 162), these curated positions do not merely spill down from a raised level surface onto society but are dialectically embedded in, and refract, society. To this extent, early scholarship distinguishing “offline” and “online” ontologies reads as if penned in a different world; today, there is a much stronger consensus that the behaviors, policies and identities of social media platforms shape myriad realities and the horizons of possibility that lie therein—be it to protect or to weaken democratic guardrails (Campos Mello, 2020), to mould socialization patterns among teenagers (Bucknell Bossen & Kottasz, 2020) or to accommodate a notable rise in ADHD self-diagnoses (Yeung et al., 2022). The systematic moderation of content, then, is much more than a regulatory sop to the expectations (or legal demands) of states or governing bodies; it is the very means by which social media platforms procedurally generate their identities, mirrored by the (political) cultures that they permit to flourish within their realm. Simply put, content moderation shapes the world(s) in which we live.

There is little question that social media companies recognize the power of content moderation as a fulcrum of their identities/brands, despite a longstanding lack of transparency on how and why moderation decisions are made in practice (see Gorwa & Ash, 2020; Looney, 2023). Indeed, one might say that this dynamic lay at the heart of Meta's 2023 launch of Threads: a major new social media platform that sought to build on a massive, pre-existing userbase. Ostensibly a Twitter clone, created in response to Elon Musk's takeover of that platform, Threads was pitched by Meta CEO Mark Zuckerberg as a “friendly place,” his initial posts making clear that a moderated culture of “kindness” would be Threads' “key to success” (Chan, 2023) and that, by effectively outsourcing content moderation to its users (Nix, 2023), what one saw on Threads would, per Meta's Global Affairs President, Nick Clegg, “feel meaningful to you.” At Musk's Twitter—re-branded to “X” on 23 July 2023—a different kind of freedom was afforded to its users. Musk dissolved Twitter's Trust and Safety Council—tasked with “addressing hate speech, child exploitation, suicide, self-harm and other problems on the platform” (O'Brien & Ortutay, 2022)—five months into his tenure, reinstated previously banned accounts and re-constituted the platform's image as a bastion of open debate, enacting much looser content moderation standards apparently tweaked to fit Musk's (malleable) dedication to “free speech absolutism” (see Sullivan, 2023). 
Data suggest that this more “open” approach has, in less than a year, resulted in a significant increase in hate speech on the platform (Darcy, 2022), coupled with a boom in conspiracy-facing content (Center on Extremism, 2023) and disinformation at such scale that the European Commission, in September 2023, identified X as “the platform with the largest ratio of mis/disinformation posts” in the year to date (European Commission, 2023). The 2023 face-off between Threads and Twitter is an important cultural and political watermark and bears lessons for contemporary scholarship. At its most base, it symbolizes how two of the world's richest men have leaned into content moderation to recycle their personal identities (see Hulsemann, 2023) and to differentiate the ontologies and conversations that might be conjured on their platforms. Both are essentially re-asserting the 2008 refrains of YouTube and Google as idealistic defenders of free speech and purveyors of user power to take (back) control; but that world—and any semblance of plausible deniability—has gone. Too much scholarship has since proven a link between deleterious social media practices and democratic decline, extremism, abuse and misinformation to take any contemporary claims to neutrality seriously. And so, far removed from Musk and Zuckerberg's apparently playful hints at a Mixed Martial Arts bout in the summer of 2023, Meta and X's loosening of content moderation standards—and the gutting of election integrity teams ahead of a record number of democratic elections in 2024 (Harbath & Khizanishvili, 2023)—speaks to the more serious matter of how content moderation is wedged in a contemporary clinch between democracy and autocracy, with clear consequences for international politics and the array of political actors who stand to be affected.

Returning to Gillespie et al.'s (2020) definition of content moderation, decisions to include or exclude ultimately fall to the platforms, be they aided by AI, human labor or a combination of both (see Gorwa et al., 2020). Yet, for a fuller understanding, we must also consider the range of stakeholders who feed into, and are affected by, these policies. Exerting pressure from above, for example, the EU's Digital Services Act (DSA)—effective as of August 23, 2023—will surely temper the actions of large online platforms as the realities of regulation grind against the libertarian ideals upon which so many of these platforms have been built (Barbrook & Cameron, 1996; Marwick, 2017). Indeed, as Reem Ahmed argues in this Special Issue, Germany's pivotal Network Enforcement Act (NetzDG)—passed in 2017—not only overlaps with the DSA in respect of its legal parameters: its norm-building prowess—and influence on the DSA—marks a plot-point in a common struggle to “rein in Big Tech,” with its adjoining challenge to maintain a workable balance between liberty and security (see Bigo, 2016). This task is rendered difficult, but not impossible, by the comparative absence of state regulation in the United States (see Busch, 2023; Morar & dos Santos, 2020), though court cases brought by state legislators (in Florida and Texas) against tech companies for impeding freedom of speech (Zakrzewski, 2023) highlight that progress on top-down regulation of social media moves not as a monolith but is (also) tempered by bottom-up pressures wrought by civil society. 
These dueling pressures entail that regulatory momentum on content moderation unfolds slowly, but the passing of landmark legislation in alternative spheres of power, such as Brazil (Tomaz, 2023) and the UK (Satariano, 2023), offers additional markers of a quickening pace. As the regulatory policies of states crystallise into a new frontier of geopolitics, a liberal consensus on content moderation appears to be settling on the joint principles of “human autonomy, dignity and democracy” (Mansell, 2023, p. 145). These values form the basis of the European Commission's definitive goal for social media regulation: to set “an international benchmark for a regulatory approach to online intermediaries” (European Commission, 2022) that explicitly aligns with “the rights and responsibilities of users, intermediary platforms, and public authorities and is based on European values—including the respect of human rights, freedom, democracy, equality and the rule of law” (European Commission, 2020). Contemporary ruptures in world politics—including the persistent agitation of authoritarian-populism (see Schäfer, 2022)—suggest that the path to this ideal will, at the very least, be fraught with resistance, bringing with it a hardened commitment on the part of extremists to resist sweeping changes that might harm their political projects (McNeil-Wilson & Flonk, 2023), not to mention their commercial interests (see Caplan & Gillespie, 2020). State, intra- or supra-state regulation may offer the most direct promise of meaningful change, but we must beware the “mythical claims about regulatory efficiency” (Mansell, 2023, p. 145) and temper expectations about what top-down regulation can achieve alone, however ambitious or laudable these moves may be.

From below, content moderation practices (including content takedowns) are known to nourish, if not spark, political resistance, potentially invigorating global civil society—albeit often as an unintended consequence (see Alimardani & Elswah, 2021). The fight for visibility among content-creators in India, Indonesia, and Pakistan (Zeng & Kaye, 2022), content moderators (Roberts, 2019) and marginalized groups more generally (Jackson, 2023) shows that “offline” and “online” forces of marginalization fold into, and replicate, one another in a shared ontology. (Online) resistance against moderation and takedown measures therefore yields the potential for the re-constitution of (offline) identities and an attendant expansion of spaces to act, speak and constitute new political identities and collective actions (see West, 2017). If this dynamic exists, then it also applies to classes of collective actors that do not fight for the same vision of social progress as identified above. As Fitzgerald and Gerrand (2023) and Mattheis and Kingdon (2023) point out, content takedowns and other moderation practices—far from erasing the extremist identities of far-right actors—can provide a boon to their collective ability to (self-)present as righteous resistors to the oppressive forces of censorship, while also “gaming” norms of content moderation and takedowns to ensure that the content they wish to push to sympathetic followers ultimately finds a way. 
The possibility for activist communities to “rage against the machine” (West, 2017) and work to emancipate themselves from the yoke of state/social media control is indeed a powerful, potentially transformative force that will, in addition to top-down measures, surely affect how the future of content moderation continues to shape international politics. The degree to which content takedowns, specifically, affect these processes warrants further inquiry and constitutes one of the central themes of this Special Issue.

In closing, though we have couched much of our opening statement in the modern vagaries of moderation, we must question the notion that the dilemmas and possibilities posed by content takedowns are inherently new. As Zhang (2023) argues, the (political) regulation of how speech is permitted (or denied) speaks to a longstanding philosophical collision between institutional and governance cultures of democratic control—content takedowns simply mark its latest frontier. Santini et al. (2023) show that although social media is to the fore in the spread of political misinformation, we cannot overlook how, in the case of Brazil, the non-removal of problematic content ensures its magnification by the country's more powerful broadcast media. Finally, Watkin (2023) sees in content takedowns a most fundamental dynamic of power: exploitative labor practices. Focusing on the mental harms caused to content moderators by sifting through, and taking down, terrorist media, she argues that a blueprint for their protection already exists: it simply needs to be reconfigured for a modern setting. Ultimately, there is much to ponder on the capacity of content moderation to either reflect or change the realities that define our fractured political landscape and the array of actors that operate in its spaces. This Special Issue intends to move the disciplinary conversation forward, sparking reflection and, we hope, further conversation on the theory, practice and ethics of content takedowns.

The first article, by Colten Meisner (2023), highlights the vulnerability of social media creators in the face of mass reporting—a targeted, automated strategy used to trigger content takedowns and account bans (a dynamic sketched schematically below). This form of harassment exploits platform infrastructures for community governance, leaving creators with few avenues of support or access to platform assistance after orchestrated attacks. By conducting interviews with affected creators, the article seeks to understand how content reporting tools can be weaponized, exposing creators to a range of challenges, including barriers to self-expression. The findings are crucial to understanding the weaponization of content takedowns to “remove” voices from the public sphere.

The impact of algorithmic content moderation practices on marginalized groups, particularly activists, is the emphasis of the second article, by Diane Jackson. While previous research has explored the limitations of automated content moderation, this article places it in the context of global social movements. It illustrates how marginalized groups experience online oppression similar to their offline marginalization and discusses the ethical and political implications at various levels: individual, organizing, and societal. The article calls for a systemic consideration of the effects of algorithmic content moderation, including takedown measures, on both online and offline activism. 
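To make the mass-reporting dynamic that Meisner documents more concrete, the following minimal sketch (ours, not drawn from the article) shows a deliberately naive, hypothetical auto-takedown rule driven by raw report counts; the threshold, weighting, and names are illustrative assumptions only, not the policy of any real platform.

```python
# Illustrative sketch only: a hypothetical, naive moderation rule showing why
# raw report counts are gameable by coordinated "mass reporting" campaigns.
from dataclasses import dataclass, field

REPORT_THRESHOLD = 25          # hypothetical auto-review/takedown trigger
UNIQUE_REPORTER_WEIGHT = 1.0   # every report counts equally -- the weak point


@dataclass
class Post:
    post_id: str
    reports: list[str] = field(default_factory=list)  # reporter account IDs

    def add_report(self, reporter_id: str) -> None:
        self.reports.append(reporter_id)


def should_auto_takedown(post: Post) -> bool:
    """Naive rule: remove the post once enough reports accumulate.

    Because the rule ignores reporter reputation, account age, and
    coordination signals, a brigade of throwaway accounts can push any
    target over the threshold.
    """
    score = len(post.reports) * UNIQUE_REPORTER_WEIGHT
    return score >= REPORT_THRESHOLD


# A coordinated campaign of 30 new accounts is enough to silence a creator.
target = Post(post_id="creator-video-001")
for i in range(30):
    target.add_report(f"brigade_account_{i}")
print(should_auto_takedown(target))  # True: content removed pending appeal
```

The point of the toy rule is that any count-based trigger which ignores who is reporting, and whether reports arrive in coordinated bursts, converts community-governance infrastructure into a censorship tool for organized harassers.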
In the third article, Meiqing Zhang delves into the multilevel sources of contention in content removal policies on social media platforms. The author exposes the value conflicts inherent in content moderation, where competing democratic virtues collide. Philosophical debates wrestle with the institutionalization of speech censorship, while governance challenges arise in determining who should be responsible for content guidelines. Furthermore, operational issues surface with existing lexicon-based content deletion technologies, which are prone to errors. The article invites us to ponder the clash of democratic values and the confusion surrounding governance in a digital public sphere. It calls for a new social consensus and legitimate processes to establish a mode of online speech governance aligned with democratic principles.

The fourth article, by Vivian Gerrand and James Fitzgerald, examines the circulation of conspiracy theories within the wellness and health industry. The article, grounded in political philosophy and inspired by Chantal Mouffe's work, delves into the impact of content takedowns on online community formation and the unintended consequences of takedown policies as a potential accelerant to this process. It focuses on the global wellness industry, using the case of Australian wellness chef Pete Evans to illustrate how content takedown policies can inadvertently foster extremist sentiments in counter-hegemonic discursive spaces.

While technology-driven content moderation gains prominence, extremist groups and conspiracy theorists have become adept at manipulating media content and technological affordances to evade regulation. The fifth article, by Ashley Mattheis and Ashton Kingdon, unveils three primary manipulation tactics—numerology, borderlands, and merchandising—used by extremists online. It transcends ideological boundaries to focus on the tactics themselves, offering case examples from various extremist ideologies. The analysis underscores the importance of understanding how extremists use manipulation strategies to “game” content moderation. It calls for demystification processes to be incorporated into content moderation settings, expanding our understanding of sociotechnical remedial measures.

The sixth article, by Sean Looney, tackles the critical role of Content Delivery Networks (CDNs) in the internet ecosystem and their response to extremist and terrorist content hosted on their servers. Using the example of Cloudflare and Kiwi Farms, it highlights the lack of a standardized approach to content moderation across CDNs. The reluctance of Cloudflare's CEO to intervene underscores the ethical dilemma faced by CDNs, while the subsequent actions of CDNs like Diamwall raise questions about the industry's obligations. The article emphasizes the need for clear rules and obligations in the realm of CDN services, particularly in light of the EU's Digital Services Act 2022.

In the seventh article, examining the takedown policies of mainstream platforms, Marie Santini, Débora Salles, and Bruno Mattos explore YouTube's recommendation system using Brazil as their case study. Their experiment in understanding the recommendation system sheds light on the systematic powers of platforms. Contrary to the platform's stated aim of reining in extremist content, their findings demonstrate that YouTube systematically gave preference to content from Jovem Pan, Brazil's largest conservative media outlet (akin to Fox News in the United States), and did not remove its “toxic” content. 
This article illustrates how the recommendation algorithm of a mainstream platform magnified the imbalance in the portrayal of political candidates, thereby exposing a stark regulatory asymmetry between traditional broadcast media and online platforms in Brazil. By using a non-Anglophone, Global South case study, the article powerfully demonstrates the intricate dynamics of online content recommendation, and its potential impact on shaping public opinion and discourse, in spheres of regulation that are less well known to international audiences.

Moving into the “traditional” sense of content takedowns, the eighth article, by Amy Louise Watkin, takes a broader perspective by considering their regulatory aspects. It highlights the criticism surrounding existing regulations for the removal of terrorist content from tech platforms, particularly concerning issues of free speech and employee well-being. Drawing inspiration from social regulation approaches in other industries, such as environmental protection, consumer protection, and occupational health and safety, the article advocates for a new regulatory approach that addresses both content moderation and the safety of content moderators themselves.

Delving into states as regulators, Richard McNeil-Wilson and Danielle Flonk focus on the conundrum in which the European Union finds itself in its quest to combat far-right extremism online. Pressure to address this issue has led to the development of a Europe-wide response. However, this response has been characterized by a delicate balancing of policy agreements among member states, resulting in a potentially concerning mode of policymaking. The article highlights the challenges posed by the broadening and loosening of definitions surrounding far-right extremist content. By combining primary sources, including policy documents, with interviews with EU politicians and practitioners, the article examines the framing and securitization of extremist content regulation over time. It reveals how the securitizing lens of counter-extremism may unintentionally complicate the development of coherent and effective responses to the far right.

Turning to the effects of content takedowns on researchers, Aaron Zelin (2023) maps out the challenges and broader consequences of automated content takedowns for those who study extremist content. By surveying and interviewing researchers who work on such content and integrating their experiences and concerns, the article challenges the binary approach to content takedowns. More importantly, these data inform possible policy solutions for governments and platforms to maximize the efficiency of takedowns of extremist content while minimizing the consequences for researchers.

The final article, by Reem Ahmed (2023), takes us to Germany, where the Netzwerkdurchsetzungsgesetz (NetzDG) has reshaped the landscape of state-regulated content takedowns. This pioneering act enforces offline legality online, presenting a blueprint for content moderation worldwide. However, concerns have arisen regarding the balance between freedom of expression and law enforcement in content moderation. Through examining NetzDG-related case law and disputed takedowns, the study identifies the main points of contention and underscores the role of judicial decisions in the broader regulatory discourse. It delves into the challenges of identifying illegal content online and the implications for content moderation practices, including content takedowns.
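Zhang's point about error-prone, lexicon-based deletion, and Ahmed's related concern about identifying illegal content at scale, can be illustrated with a short sketch of our own (the blocklist and example sentences are hypothetical, not taken from any platform or from the articles): a simple keyword filter both over-removes lawful counter-speech and under-removes trivially obfuscated content.

```python
# Illustrative sketch only: a toy lexicon-based filter of the kind Zhang
# critiques, with a hypothetical term list, showing how keyword matching both
# over- and under-removes when context and obfuscation are ignored.
import re

BLOCKLIST = {"attack", "bomb"}  # hypothetical lexicon entries


def lexicon_flag(text: str) -> bool:
    """Flag text for removal if any blocklisted term appears as a whole word."""
    tokens = re.findall(r"[a-z]+", text.lower())
    return any(token in BLOCKLIST for token in tokens)


# False positive: counter-speech and atrocity documentation use the same vocabulary.
print(lexicon_flag("Survivors describe the attack to war-crimes investigators."))  # True

# False negative: trivial obfuscation slips past the lexicon.
print(lexicon_flag("join the att4ck tomorrow"))  # False
```

The asymmetry is the crux of the policy problem rehearsed throughout this issue: the same blunt instrument that erases documentation of atrocity (Banchik, 2021) is easily evaded by the borderland and obfuscation tactics Mattheis and Kingdon describe.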
Journal introduction:
Understanding public policy in the age of the Internet requires understanding how individuals, organizations, governments and networks behave, and what motivates them in this new environment. Technological innovation and internet-mediated interaction raise both challenges and opportunities for public policy: whether in areas that have received much work already (e.g. digital divides, digital government, and privacy) or newer areas, like regulation of data-intensive technologies and platforms, the rise of precarious labour, and regulatory responses to misinformation and hate speech. We welcome innovative research in areas where the Internet already impacts public policy, where it raises new challenges or dilemmas, or provides opportunities for policy that is smart and equitable. While we welcome perspectives from any academic discipline, we look particularly for insight that can feed into social science disciplines like political science, public administration, economics, sociology, and communication. We welcome articles that introduce methodological innovation, theoretical development, or rigorous data analysis concerning a particular question or problem of public policy.