Special issue: The (international) politics of content takedowns: Theory, practice, ethics

IF 4.1 | CAS Tier 1 (Literature) | Q1 COMMUNICATION | Policy and Internet | Pub Date: 2023-11-06 | DOI: 10.1002/poi3.375
James Fitzgerald, Ayse D. Lokmanoglu
{"title":"Special issue: The (international) politics of content takedowns: Theory, practice, ethics","authors":"James Fitzgerald, Ayse D. Lokmanoglu","doi":"10.1002/poi3.375","DOIUrl":null,"url":null,"abstract":"Content takedowns have emerged as a key regulatory pillar in the global fight against misinformation and extremism. Despite their increasing ubiquity as disruptive tools in political processes, however, their true efficacy remains up for debate. We “know,” for example, that takedowns had a strong disruptive effect on Islamic State-supporting networks from 2014 onwards (Conway et al., 2019), but we do not know whether constraining avenues for expression may ultimately accelerate acts of violence. We also know that extreme-right networks have weaponised content takedowns as evidence of victimization and the steadfast erosion of “free speech,” often underpinning calls to (violent) action and leveraging the popularity of alt-media—such as Gab, Rumble, Truth Social and Parler—as refuges for the persecuted and de-platformed alike. There is need for caution, too, as takedowns are applied by authoritarian governments to stifle dissent—measures increasingly absorbed into law (see Jones, 2022)—while in various theaters of conflict, content takedowns have erased atrocity and resistance, ultimately disrupting the archiving of war (see Banchik, 2021). This special issue collates inter-disciplinary perspectives on how the policies and practices of content takedowns interact, with consequences for international politics. Across 11 papers, we explore how content takedowns variously interface with: democracy, history, free speech, national and regional regulations, activism, partisanism, violent extremism, effects on marginilized populations, strategies and techniques (i.e., self-reporting, AI, and variations amongst platforms), and flexibility and adaptability (i.e., migration, hidden messages) of harmful content and actors. The papers in this issue are geographically diverse, with perspectives from Latin America, the Middle East and North Africa, Europe, North America, and Oceania. The editors consider content takedowns as a function of content moderation, aligning with the consensus view (see Gillespie et al., 2020); nevertheless, a review of the literature finds that content takedowns are rarely treated as the primary object of inquiry. While the subsumption of content takedowns as a subtopic of content moderation is understandable, this Special Issue attempts to foreground content takedowns as the primary focus for analysis: a subtle epistemological shift that provides a collective contribution to academic and policy-facing debates. To that end, it is necessary to define our basic terms of reference. Turning first to content moderation, one of the earliest—and most cited1—interpretations is that of Kaplan and Haenlein (2010), who view it as ‘the self-regulation of social media companies for the safety of its users'. 
Though useful, this interpretation fails to account for an intractable proviso: tech companies act as intermediaries to the content they are hosting and removing, but do not want to be liable for the content (Caplan & Napoli, 2018; Gillespie, 2010).2 Consequently, there is no single standard of content moderation that is applied by all tech companies, just as, clearly, there is no international governance of the World Wide Web (Wu, 2015).3 Concept moderation is, therefore, a concept born(e) of multiplicity, accounting for a range of actors that necessarily includes, but is not limited to, tech companies. We are more convinced by the holistic perspective of Gillespie at al. (2020), who define content moderation as: [T]he detection of, assessment of, and interventions taken on content or behavior deemed unacceptable by platforms or other information intermediaries, including the rules they impose, the human labor and technologies required, and the institutional mechanisms of adjudication, enforcement, and appeal that support it (Gillespie at al., 2020, p. 2) Divining a neat definition of content takedowns is a more difficult task for several reasons. First, there does not exist, to our knowledge, an authoritative definition of content takedowns comparable with, say, Kaplan and Haenlein (2010) and Gillespie et al. (2020). Second—and owing to the novelty of this Special Issue—most studies that engage with content takedowns tend to situate their analyses within the remit of content moderation, assuming recognition of “content takedowns” as a conceptual fait accompli (see, e.g., Lakomy, 2023). We note this trend not as a criticism, but an observation. Third, content takedowns have been studied across several academic fields, including legal studies, media studies, sociology and terrorism/extremism studies, entailing a panoply of contending assumptions and disciplinary tendencies—a useful definition of content takedowns pursuant to copyright law (see Bar-Ziv & Elkin-Koren, 2018), for example, does not quite speak to the intended breadth of this Special Issue. With these provisos to hand, we synthesize Gillespie et al. (2020) definition of content moderation with Singh and Bankston's (2018) extensive typology4 to define content takedowns as: The removal of “problematic content” by platforms, or other information intermediaries, pursuant to legal or policy requirements, which occur across categories that include, but are not limited to: Government and legal content demands; copyright requests; trademark requests; network shutdowns and service interruptions; Right to be Forgotten delisting requests and; Community guidelines-based removals. Having established basic parameters, we now turn to provide a commentary on some of the most substantial political dimensions of content moderation and content takedowns, before providing a brief, individual summary of each paper. In 2010, Tarterton Gillespie took aim at social media companies' collective assertion that they offered neutral sites for communication, on which debate—and politics—simply occurred. By gradually replacing self-descriptive terms like “company” and “service” with “platform,” these entities projected an image of neutrality—the “platform” being a “‘raised level surface' designed to facilitate some activity that will subsequently take place” (Gillespie, 2010, p. 350). 
Pure neutrality is, of course, an illusion and early expressions of political sympathies among these enterprises tended to reach for the assuring hand of free-market capitalism, ideally unbound by government regulation (see Fuchs, 2013). Google and YouTube, for example, positioned themselves as “champions of freedom of expression” (Gillespie, 2010, p. 356) and in response to a takedown request of Jihadi content on YouTube from US Senator Joe Lieberman, the platform qualified its partial fulfillment with a rejoinder that it “encourages free speech and defends everyone's right to express unpopular points of view…allow[ing] our users to view all acceptable content and make up their own minds’” (YouTube Team, 2008 in ibid, p. 356). From the vantage point of 2023, these assertions of neutrality appear misguided at best, or at the very least, naively insulated from the degree to which major social media platforms would come to occupy positions of regulatory power to rival, and in some cases usurp, the traditional roles of states (see Klonick, 2017). Social media companies' application of content moderation policies affords to them great power; yet by essentially “establishing norms of what information and behaviors are censored or promoted on platforms” (González-Bailón & Lelkes, 2023, p. 162) these curated positions do not merely spill down from a raised level surface onto society but are dialectically embedded in, and refract, society. To this extent, early scholarship distinguishing “offline” and “online” ontologies read as if penned in a different world; today, there is a much stronger consensus that the behaviors, policies and identities of social media platforms shape myriad realities and the horizons of possibility that lie therein—be it to protect or to weaken democratic guardrails (Campos Mello, 2020), to mould socialization patterns among teenagers (Bucknell Bossen & Kottasz, 2020) or to accommodate a notable rise in ADHD self-diagnoses (Yeung et al., 2022). The systematic moderation of content, then, is much more than a regulatory sop to the expectations (or legal demands) of states or governing bodies, it is the very means by which social media platforms procedurally generate their identities, mirrored by the (political) cultures that they permit to flourish within their realm. Simply put, content moderation shapes the world(s) in which we live. There is little question that social media companies recognize the power of content moderation as a fulcrum of their identities/brands, despite a longstanding lack of transparency on how and why moderation decisions are made in practice (see Gorwa & Ash, 2020; Looney, 2023). Indeed, one might say that this dynamic lay at the heart of Meta's 2023 launch of Threads: a major new social media platform that sought to build on a massive, pre-existing userbase. Ostensibly a Twitter clone and created in response to Elon Musk's takeover of that platform, Threads was pitched by Meta CEO Mark Zuckerberg as a “friendly place,” his initial posts making clear that a moderated culture of “kindness” would be Threads' “key to success” (Chan, 2023) and that by effectively outsourcing content moderation to its users (Nix, 2023), what one saw on Threads would, per Meta's Global Affairs President, Nick Clegg, “feel meaningful to you.” At Musk's Twitter—re-branded to “X” on 23 July 2023—a different kind of freedom was afforded to its users. 
Musk dissolved Twitter's Trust and Safety council—tasked with “addressing hate speech, child exploitation, suicide, self-harm and other problems on the platform” (O' Brien & Ortutay, 2022)—5 months into his tenure, reinstated previously banned accounts and re-constituted the platform's image as a bastion of open debate, enacting much looser content moderation standards apparently tweaked to fit Musk's (malleable) dedication to “free speech absolutism” (see Sullivan, 2023). Data suggest that this more “open” approach has, in less than 1 year, resulted in a significant increase in hate speech on the platform (Darcy, 2022), coupled with a boom in conspiracy-facing content (Center on Extremism, 2023) and disinformation at such scale that the European Commission, in September 2023, identified X as “the platform with the largest ratio of mis/disinformation posts” in the year-to-date (European Commission, 2023). The 2023 face-off between Threads and Twitter is an important cultural and political watermark and bears lessons for contemporary scholarship. At its most base, it symbolizes how two of the world's richest men have leaned into content moderation to recycle their personal identities (see Hulsemann, 2023) and to differentiate the ontologies and conversations that might be conjured on their platforms. Both are essentially re-asserting the 2008 refrains of YouTube and Google as idealistic defenders of free speech and purveyors of user power to take (back) control; but that world—and any semblance of plausible deniability—has gone. Too much scholarship has since proven a link between deleterious social media practices and democratic decline, extremism, abuse and misinformation, to take any contemporary claims to neutrality seriously. And so, far removed from Musk and Zuckerberg's apparently playful hints at a Mixed Martial Arts bout in the summer of 2023, Meta and X's loosening of content moderation standards—and the gutting of election integrity teams ahead of a record number of democratic elections in 2024 (Harbath & Khizanishvili, 2023)—speaks to the more serious matter of how content moderation is wedged in a contemporary clinch between democracy and autocracy, with clear consequences for international politics and the array of political actors who stand to be affected. Returning to Gillespie et al. (2020) definition of content moderation, decisions to include or exclude ultimately fall to the platforms, be they aided by AI, human labor or a combination of both (see Gorwa et al., 2020). Yet, for a fuller understanding, we must also consider the range of stakeholders who feed into, and are affected by, these policies. Exerting pressure from above, for example, the EU's Digital Services Act (DSA)—effective as of August 23, 2023—will surely temper the actions of large online platforms as the realities of regulation grind against the libertarian ideals upon which so many of these platforms have been built (Barbrook & Cameron, 1996; Marwick, 2017). Indeed, as Reem Ahmed argues in this Special Issue, Germany's pivotal Network Enforcement Act (NetzDG)—passed in 2017—not only overlaps with the DSA in respect of its legal parameters: its norm-building prowess—and influence on the DSA—marks a plot-point in a common struggle to “rein in Big Tech,” with its adjoining challenge to maintain a workable balance between liberty and security (see Bigo, 2016). 
This task is rendered difficult, but not impossible, by the comparative absence of state regulation in the United States (see Busch, 2023; Morar & dos Santos, 2020), though court cases brought by state legislators (in Florida and Texas) against tech companies for impeding freedom of speech (Zakrzewski, 2023) highlights that progress on top-down regulation of social media moves not as a monolith but is (also) tempered by bottom-up pressures wrought by civil society. These dueling pressures entail that regulatory momentum on content moderation unfolds slowly, but the passing of landmark legislation in alternative spheres of power, such as Brazil (Tomaz, 2023) and the UK (Satariano, 2023) offer additional markers for a quickening pace. As the regulatory policies of states crystalise into a new frontier of geopolitics, a liberal consensus on content moderation appears to be settling on the joint principles of “human autonomy, dignity and democracy” (Mansell, 2023, p. 145). These values form the basis of the European Commission's definitive goal for social media regulation: to set “an international benchmark for a regulatory approach to online intermediaries” (European Commission, 2022) that explicitly aligns with “the rights and responsibilities of users, intermediary platforms, and public authorities and is based on European values—including the respect of human rights, freedom, democracy, equality and the rule of law.” (European Commission, 2020). Contemporary ruptures in world politics—including the persistent agitation of authoritarian-populism (see Schäfer, 2022)—suggest that the path to this ideal will, at the very least, be fraught with resistance, bringing with it a hardened commitment on the part of extremists to resist sweeping changes that might harm their political projects (McNeil-Wilson & Flonk, 2023), not to mention their commercial interests (see Caplan & Gillespie, 2020). State, intra- or supra-state regulation may offer the most direct promise of meaningful change, but we must beware the “mythical claims about regulatory efficiency” (Mansell, 2023, p. 145) and temper expectations about what top-down regulation can achieve alone, however ambitious or laudable these moves may be. From below, content moderation practices (including content takedowns) are known to nourish, if not spark, political resistance, potentially invigorating global civil society—albeit often as an unintended consequence (see Alimardani & Elswah, 2021). The fight for visibility among content-creators in India, Indonesia, and Pakistan (Zeng & Kaye, 2022), content moderators (Roberts, 2019) and marginalized groups more generally (Jackson, 2023) shows that “offline” and “online” forces of marginalization fold into, and replicate, one another in a shared ontology. (Online) resistance against moderation and takedown measures therefore yields the potential for the re-constitution of (offline) identities and an attendant expansion of spaces to act, speak and constitute new political identities and collective actions (see West, 2017). If this dynamic exists, then it also applies to classifications of collective actors that do not fight for the same vision of social progress as identified above. 
As Fitzgerald and Gerrand (2023) and Mattheis and Kingdon (2023) point out, content takedowns and other moderation practices—far from erasing extremist identities on the part of far-right actors—can provide a boon to their collective ability to (self-)present as righteous resistors to the oppressive forces of censorship, while also “gaming” norms of content moderation and takedowns to ensure that the content they wish to push to sympathetic followers ultimately finds a way. The possibility for activist communities to “rage against the machine” (West, 2017) and work to emancipate themselves from the yoke of states/social media control is indeed a powerful, potentially transformative force that will, in addition to top-down measures, surely affect how the future of content moderation continues to shape international politics. The degree to which content takedowns, specifically, affect these processes warrants further inquiry and constitutes one of the central themes of this Special Issue. In closing, though we have couched much of our opening statement on the modern vagaries of moderation, we must give pause to the notion that the dilemmas and possibilities posed by content takedowns are inherently new. As Zhang (2023) argues the (political) regulation of how speech is permitted (or denied) speaks to a longstanding philosophical collision between institutional and governance cultures of democratic control—content takedowns simply speak to its latest frontier. Santini et al. (2023) show that although social media is to the fore in the spread of political misinformation, we cannot overlook how, in the case of Brazil, the nonremoval of problematic content ensures its magnification by the country's more powerful broadcast media. Finally, Watkin (2023) sees through content takedowns a most fundamental dynamic of power, being exploitative labor practices. Focusing on the mental harms caused to content moderators by sifting through, and taking down, terrorist media, she argues that a blueprint for their protection already exists: it simply needs requires to be reconfigured to a modern setting. In closing, there is much to ponder on the veracity of content moderation to either reflect or change the realities that define our fractured political landscape and the array of actors that operate in its spaces. This Special Issue intends to move the disciplinary conversation forward, sparking reflection and, we hope, further conversation on the theory, practice and ethics of content takedowns. The first article by Colten Meisner (2023) highlights the vulnerability of social media creators in the face of mass reporting—a targeted, automated strategy used to trigger content takedowns and account bans. This form of harassment utilizes platform infrastructures for community governance, leaving creators with few avenues of support and access to platform assistance after orchestrated attacks. By conducting interviews with affected creators, this article seeks to understand how content reporting tools can be weaponized, exposing creators to a world of challenges, including barriers to self-expression. The findings are crucial in understanding the weaponization of content take downs to “remove” voices from the public sphere. The impact of algorithmic content moderation practices on marginalized groups, particularly activists, is the emphasis of the second article by Diane Jackson. 
While previous research has explored the limitations of automated content moderation, this abstract places it in the context of global social movements. It illustrates how marginalized groups experience online oppression similar to their offline marginalization and discusses the ethical and political implications at various levels – individual, organizing, and societal. This article calls for a systemic consideration of the effects of algorithmic content moderation, including takedown measures, on both online and offline activism. Meiqing Zhang delves into the multilevel sources of contention in content removal policies on social media platforms. The author exposes the value conflicts inherent in content moderation, where competing democratic virtues collide. Philosophical debates wrestle with the institutionalization of speech censorship, while governance challenges arise in determining who should be responsible for content guidelines. Furthermore, operational issues surface with existing lexicon-based content deletion technologies that are prone to errors. This article invites us to ponder the clash of democratic values and the confusion surrounding governance in a digital public sphere. It calls for a new social consensus and legitimate processes to establish a mode of online speech governance aligned with democratic principles. The fourth article by Vivian Gerrand and James Fitzgerald juxtaposes conspiracy theories within the wellness and health industry. The article, grounded in political philosophy and inspired by Chantal Mouffe's work, delves into the impact of content takedowns on online community formation and the unintended consequences of takedown policies as a potential accelerant to this process. It focuses on the global wellness industry, using the case of Australian wellness chef Pete Evans to illustrate how content takedown policies can inadvertently foster extremist sentiments in counter-hegemonic discursive spaces. While technology-driven content moderation gains prominence, extremist groups and conspiracy theorists have become adept at manipulating media content and technological affordances to evade regulation. This article by Ashley Mathheis and Ashton Kingdon unveils three primary manipulation tactics—numerology, borderlands, and merchandising—used by extremists online. It transcends ideological boundaries to focus on the tactics themselves, offering case examples from various extremist ideologies. The analysis underscores the importance of understanding how extremists use manipulation strategies to “game” content moderation. It calls for demystification processes to be incorporated into content moderation settings, expanding our understanding of sociotechnical remedial measures. The sixth article by Sean Looney tackles the critical role of Content Delivery Networks (CDNs) in the internet ecosystem and their response to extremist and terrorist content hosted on their servers. Using the example of Cloudflare and Kiwifarms, it highlights the lack of a standardized approach to content moderation across CDNs. The CEO of Cloudflare's reluctance to intervene underscores the ethical dilemma faced by CDNs, while the subsequent actions of CDNs like Diamwall raise questions about the industry's obligations. The article emphasizes the need for clear rules and obligations in the realm of CDN services, particularly in light of the EU's Digital Services Acts 2022. 
In examining takedown policies of mainstream platforms, Marie Santini, Débora Salles, and Bruno Mattos explore YouTube's recommendation system using Brazil as their case study. Their experiment in understanding the recommendation system sheds light on the systematic powers of platforms. Contrary to the stated aim of reigning in extremist content, their findings demonstrated that YouTube systematically gave preference to Jovem Pan content, Brazil's largest conservative media outlet (akin to Fox News in the United States), and the non-removal of the “toxic” content. This article illustrates how the recommendation algorithm of a mainstream platform magnified the imbalance in the portrayal of political candidates, thereby exposing a stark regulatory asymmetry between traditional broadcast media and online platforms in Brazil. By using a non-Anglophone and Global South case study, the article powerfully demonstrates the intricate dynamics of online content recommendation, and its potential impact on shaping public opinion and discourse, in spheres of regulation that are less well known to international audiences. Moving into the “traditional” sense of content takedowns, the seventh article by Amy Louise Watkin takes a broader perspective by considering their regulatory aspects. It highlights the criticism surrounding existing regulations for the removal of terrorist content from tech platforms, particularly concerning issues of free speech and employee well-being. Drawing inspiration from social regulation approaches in other industries like environmental protection, consumer protection, and occupational health and safety, this article advocates for a new regulatory approach that addresses both content moderation and the safety of content moderators themselves. Delving into states as regulators, Richard McNeill-Wilson and Danielle Flonk focus on the conundrum in which the European Union finds itself, regarding its quest to combat far-right extremism online. Pressure to address this issue has led to the development of a European-wide response. However, this response has been characterized by a delicate balance between policy agreements among member states, resulting in a potentially concerning feature of policymaking. The article highlights the challenges posed by the broadening and loosening of definitions surrounding far-right extremist content. By combining primary sources including policy documents with interviews with EU politicians and practitioners, this article examines the framing and securitization of extremist content regulation over time. It reveals how the securitizing lens of counter-extremism may unintentionally complicate the development of coherent and effective responses to the far right. In understanding the power of content takedown on researchers, Aaron Zelin maps out the challenges and broader consequences of automated content takedown to researchers Zelin (2023). By surveying and interviewing researchers who work on extremist content and integrating their experiences and concerns, this manuscript challenges the binary approach to content takedown. More importantly, using this data provides possible policy solutions to governments and platforms to maximize the efficiency of content takedown on extremist content while minimizing the consequences for researchers. The final article by Reem Ahmed (2023) takes us to Germany, where the Netzwerkdurchsetzungsgesetz (NetzDG) has reshaped the landscape of state-regulated content takedowns. 
This pioneering act enforces offline legality online, presenting a blueprint for content moderation worldwide. However, concerns have arisen regarding the balance between freedom of expression and law enforcement in content moderation. Through examining NetzDG-related case law and disputed takedowns, this study identifies the main points of contention and underscores the role of judicial decisions in the broader regulatory discourse. It delves into the challenges of identifying illegal content online and the implications for content moderation practices, including content takedown.","PeriodicalId":46894,"journal":{"name":"Policy and Internet","volume":null,"pages":null},"PeriodicalIF":4.1000,"publicationDate":"2023-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Policy and Internet","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1002/poi3.375","RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMMUNICATION","Score":null,"Total":0}
引用次数: 0

Abstract

Content takedowns have emerged as a key regulatory pillar in the global fight against misinformation and extremism. Despite their increasing ubiquity as disruptive tools in political processes, however, their true efficacy remains up for debate. We “know,” for example, that takedowns had a strong disruptive effect on Islamic State-supporting networks from 2014 onwards (Conway et al., 2019), but we do not know whether constraining avenues for expression may ultimately accelerate acts of violence. We also know that extreme-right networks have weaponised content takedowns as evidence of victimization and the steadfast erosion of “free speech,” often underpinning calls to (violent) action and leveraging the popularity of alt-media—such as Gab, Rumble, Truth Social and Parler—as refuges for the persecuted and de-platformed alike. There is need for caution, too, as takedowns are applied by authoritarian governments to stifle dissent—measures increasingly absorbed into law (see Jones, 2022)—while in various theaters of conflict, content takedowns have erased atrocity and resistance, ultimately disrupting the archiving of war (see Banchik, 2021).

This special issue collates inter-disciplinary perspectives on how the policies and practices of content takedowns interact, with consequences for international politics. Across 11 papers, we explore how content takedowns variously interface with: democracy, history, free speech, national and regional regulations, activism, partisanship, violent extremism, effects on marginalized populations, strategies and techniques (e.g., self-reporting, AI, and variations amongst platforms), and the flexibility and adaptability (e.g., migration, hidden messages) of harmful content and actors. The papers in this issue are geographically diverse, with perspectives from Latin America, the Middle East and North Africa, Europe, North America, and Oceania. The editors consider content takedowns as a function of content moderation, aligning with the consensus view (see Gillespie et al., 2020); nevertheless, a review of the literature finds that content takedowns are rarely treated as the primary object of inquiry. While the subsumption of content takedowns as a subtopic of content moderation is understandable, this Special Issue attempts to foreground content takedowns as the primary focus for analysis: a subtle epistemological shift that provides a collective contribution to academic and policy-facing debates.

To that end, it is necessary to define our basic terms of reference. Turning first to content moderation, one of the earliest—and most cited—interpretations is that of Kaplan and Haenlein (2010), who view it as “the self-regulation of social media companies for the safety of its users.” Though useful, this interpretation fails to account for an intractable proviso: tech companies act as intermediaries to the content they are hosting and removing, but do not want to be liable for that content (Caplan & Napoli, 2018; Gillespie, 2010). Consequently, there is no single standard of content moderation applied by all tech companies, just as, clearly, there is no international governance of the World Wide Web (Wu, 2015). Content moderation is, therefore, a concept born(e) of multiplicity, accounting for a range of actors that necessarily includes, but is not limited to, tech companies. We are more convinced by the holistic perspective of Gillespie et al.
(2020), who define content moderation as: “[T]he detection of, assessment of, and interventions taken on content or behavior deemed unacceptable by platforms or other information intermediaries, including the rules they impose, the human labor and technologies required, and the institutional mechanisms of adjudication, enforcement, and appeal that support it” (Gillespie et al., 2020, p. 2).

Divining a neat definition of content takedowns is a more difficult task for several reasons. First, there does not exist, to our knowledge, an authoritative definition of content takedowns comparable with, say, Kaplan and Haenlein (2010) or Gillespie et al. (2020). Second—and owing to the novelty of this Special Issue—most studies that engage with content takedowns tend to situate their analyses within the remit of content moderation, assuming recognition of “content takedowns” as a conceptual fait accompli (see, e.g., Lakomy, 2023). We note this trend not as a criticism, but as an observation. Third, content takedowns have been studied across several academic fields, including legal studies, media studies, sociology and terrorism/extremism studies, entailing a panoply of contending assumptions and disciplinary tendencies—a useful definition of content takedowns pursuant to copyright law (see Bar-Ziv & Elkin-Koren, 2018), for example, does not quite speak to the intended breadth of this Special Issue. With these provisos to hand, we synthesize Gillespie et al.'s (2020) definition of content moderation with Singh and Bankston's (2018) extensive typology to define content takedowns as: the removal of “problematic content” by platforms, or other information intermediaries, pursuant to legal or policy requirements, occurring across categories that include, but are not limited to: government and legal content demands; copyright requests; trademark requests; network shutdowns and service interruptions; Right to be Forgotten delisting requests; and community guidelines-based removals.

Having established basic parameters, we now turn to provide a commentary on some of the most substantial political dimensions of content moderation and content takedowns, before providing a brief, individual summary of each paper. In 2010, Tarleton Gillespie took aim at social media companies' collective assertion that they offered neutral sites for communication, on which debate—and politics—simply occurred. By gradually replacing self-descriptive terms like “company” and “service” with “platform,” these entities projected an image of neutrality—the “platform” being a “‘raised level surface' designed to facilitate some activity that will subsequently take place” (Gillespie, 2010, p. 350). Pure neutrality is, of course, an illusion, and early expressions of political sympathies among these enterprises tended to reach for the assuring hand of free-market capitalism, ideally unbound by government regulation (see Fuchs, 2013). Google and YouTube, for example, positioned themselves as “champions of freedom of expression” (Gillespie, 2010, p. 356), and in response to a request from US Senator Joe Lieberman to take down jihadi content on YouTube, the platform qualified its partial fulfillment with a rejoinder that it “encourages free speech and defends everyone's right to express unpopular points of view…allow[ing] our users to view all acceptable content and make up their own minds” (YouTube Team, 2008, in Gillespie, 2010, p. 356).
From the vantage point of 2023, these assertions of neutrality appear misguided at best, or at the very least, naively insulated from the degree to which major social media platforms would come to occupy positions of regulatory power to rival, and in some cases usurp, the traditional roles of states (see Klonick, 2017). Social media companies' application of content moderation policies affords them great power; yet by essentially “establishing norms of what information and behaviors are censored or promoted on platforms” (González-Bailón & Lelkes, 2023, p. 162), these curated positions do not merely spill down from a raised level surface onto society but are dialectically embedded in, and refract, society. To this extent, early scholarship distinguishing “offline” and “online” ontologies reads as if penned in a different world; today, there is a much stronger consensus that the behaviors, policies and identities of social media platforms shape myriad realities and the horizons of possibility that lie therein—be it to protect or to weaken democratic guardrails (Campos Mello, 2020), to mould socialization patterns among teenagers (Bucknell Bossen & Kottasz, 2020) or to accommodate a notable rise in ADHD self-diagnoses (Yeung et al., 2022). The systematic moderation of content, then, is much more than a regulatory sop to the expectations (or legal demands) of states or governing bodies; it is the very means by which social media platforms procedurally generate their identities, mirrored by the (political) cultures that they permit to flourish within their realm. Simply put, content moderation shapes the world(s) in which we live.

There is little question that social media companies recognize the power of content moderation as a fulcrum of their identities/brands, despite a longstanding lack of transparency on how and why moderation decisions are made in practice (see Gorwa & Ash, 2020; Looney, 2023). Indeed, one might say that this dynamic lay at the heart of Meta's 2023 launch of Threads: a major new social media platform that sought to build on a massive, pre-existing userbase. Ostensibly a Twitter clone created in response to Elon Musk's takeover of that platform, Threads was pitched by Meta CEO Mark Zuckerberg as a “friendly place,” his initial posts making clear that a moderated culture of “kindness” would be Threads' “key to success” (Chan, 2023) and that, by effectively outsourcing content moderation to its users (Nix, 2023), what one saw on Threads would, per Meta's Global Affairs President, Nick Clegg, “feel meaningful to you.” At Musk's Twitter—re-branded to “X” on 23 July 2023—a different kind of freedom was afforded to its users. Musk dissolved Twitter's Trust and Safety Council—tasked with “addressing hate speech, child exploitation, suicide, self-harm and other problems on the platform” (O'Brien & Ortutay, 2022)—five months into his tenure, reinstated previously banned accounts and re-constituted the platform's image as a bastion of open debate, enacting much looser content moderation standards apparently tweaked to fit Musk's (malleable) dedication to “free speech absolutism” (see Sullivan, 2023).
Data suggest that this more “open” approach has, in less than a year, resulted in a significant increase in hate speech on the platform (Darcy, 2022), coupled with a boom in conspiracy-facing content (Center on Extremism, 2023) and disinformation at such scale that the European Commission, in September 2023, identified X as “the platform with the largest ratio of mis/disinformation posts” in the year to date (European Commission, 2023).

The 2023 face-off between Threads and Twitter is an important cultural and political watermark and bears lessons for contemporary scholarship. At its most basic, it symbolizes how two of the world's richest men have leaned into content moderation to recycle their personal identities (see Hulsemann, 2023) and to differentiate the ontologies and conversations that might be conjured on their platforms. Both are essentially re-asserting the 2008 refrains of YouTube and Google as idealistic defenders of free speech and purveyors of user power to take (back) control; but that world—and any semblance of plausible deniability—has gone. Too much scholarship has since proven a link between deleterious social media practices and democratic decline, extremism, abuse and misinformation for any contemporary claims to neutrality to be taken seriously. And so, far removed from Musk and Zuckerberg's apparently playful hints at a Mixed Martial Arts bout in the summer of 2023, Meta and X's loosening of content moderation standards—and the gutting of election integrity teams ahead of a record number of democratic elections in 2024 (Harbath & Khizanishvili, 2023)—speaks to the more serious matter of how content moderation is wedged in a contemporary clinch between democracy and autocracy, with clear consequences for international politics and the array of political actors who stand to be affected.

Returning to Gillespie et al.'s (2020) definition of content moderation, decisions to include or exclude ultimately fall to the platforms, be they aided by AI, human labor or a combination of both (see Gorwa et al., 2020). Yet, for a fuller understanding, we must also consider the range of stakeholders who feed into, and are affected by, these policies. Exerting pressure from above, for example, the EU's Digital Services Act (DSA)—effective as of August 23, 2023—will surely temper the actions of large online platforms as the realities of regulation grind against the libertarian ideals upon which so many of these platforms have been built (Barbrook & Cameron, 1996; Marwick, 2017). Indeed, as Reem Ahmed argues in this Special Issue, Germany's pivotal Network Enforcement Act (NetzDG)—passed in 2017—not only overlaps with the DSA in respect of its legal parameters: its norm-building prowess—and influence on the DSA—marks a plot-point in a common struggle to “rein in Big Tech,” with its adjoining challenge to maintain a workable balance between liberty and security (see Bigo, 2016). This task is rendered difficult, but not impossible, by the comparative absence of state regulation in the United States (see Busch, 2023; Morar & dos Santos, 2020), though court cases brought by state legislators (in Florida and Texas) against tech companies for impeding freedom of speech (Zakrzewski, 2023) highlight that progress on top-down regulation of social media moves not as a monolith but is (also) tempered by bottom-up pressures wrought by civil society.
These dueling pressures entail that regulatory momentum on content moderation unfolds slowly, but the passing of landmark legislation in alternative spheres of power, such as Brazil (Tomaz, 2023) and the UK (Satariano, 2023), offers additional markers of a quickening pace. As the regulatory policies of states crystallise into a new frontier of geopolitics, a liberal consensus on content moderation appears to be settling on the joint principles of “human autonomy, dignity and democracy” (Mansell, 2023, p. 145). These values form the basis of the European Commission's definitive goal for social media regulation: to set “an international benchmark for a regulatory approach to online intermediaries” (European Commission, 2022) that explicitly aligns with “the rights and responsibilities of users, intermediary platforms, and public authorities and is based on European values—including the respect of human rights, freedom, democracy, equality and the rule of law” (European Commission, 2020). Contemporary ruptures in world politics—including the persistent agitation of authoritarian-populism (see Schäfer, 2022)—suggest that the path to this ideal will, at the very least, be fraught with resistance, bringing with it a hardened commitment on the part of extremists to resist sweeping changes that might harm their political projects (McNeil-Wilson & Flonk, 2023), not to mention their commercial interests (see Caplan & Gillespie, 2020). State, intra- or supra-state regulation may offer the most direct promise of meaningful change, but we must beware the “mythical claims about regulatory efficiency” (Mansell, 2023, p. 145) and temper expectations about what top-down regulation can achieve alone, however ambitious or laudable these moves may be.

From below, content moderation practices (including content takedowns) are known to nourish, if not spark, political resistance, potentially invigorating global civil society—albeit often as an unintended consequence (see Alimardani & Elswah, 2021). The fight for visibility among content creators in India, Indonesia, and Pakistan (Zeng & Kaye, 2022), content moderators (Roberts, 2019) and marginalized groups more generally (Jackson, 2023) shows that “offline” and “online” forces of marginalization fold into, and replicate, one another in a shared ontology. (Online) resistance against moderation and takedown measures therefore yields the potential for the re-constitution of (offline) identities and an attendant expansion of spaces to act, speak and constitute new political identities and collective actions (see West, 2017). If this dynamic exists, then it also applies to classifications of collective actors that do not fight for the same vision of social progress as identified above. As Fitzgerald and Gerrand (2023) and Mattheis and Kingdon (2023) point out, content takedowns and other moderation practices—far from erasing the extremist identities of far-right actors—can provide a boon to their collective ability to (self-)present as righteous resistors to the oppressive forces of censorship, while also “gaming” the norms of content moderation and takedowns to ensure that the content they wish to push to sympathetic followers ultimately finds a way.
The possibility for activist communities to “rage against the machine” (West, 2017) and work to emancipate themselves from the yoke of state/social media control is indeed a powerful, potentially transformative force that will, in addition to top-down measures, surely affect how the future of content moderation continues to shape international politics. The degree to which content takedowns, specifically, affect these processes warrants further inquiry and constitutes one of the central themes of this Special Issue.

In closing, though we have couched much of our opening statement in the modern vagaries of moderation, we must give pause to the notion that the dilemmas and possibilities posed by content takedowns are inherently new. As Zhang (2023) argues, the (political) regulation of how speech is permitted (or denied) speaks to a longstanding philosophical collision between institutional and governance cultures of democratic control—content takedowns simply speak to its latest frontier. Santini et al. (2023) show that although social media is to the fore in the spread of political misinformation, we cannot overlook how, in the case of Brazil, the nonremoval of problematic content ensures its magnification by the country's more powerful broadcast media. Finally, Watkin (2023) sees in content takedowns a most fundamental dynamic of power: exploitative labor practices. Focusing on the mental harms caused to content moderators by sifting through, and taking down, terrorist media, she argues that a blueprint for their protection already exists: it simply needs to be reconfigured for a modern setting. In sum, there is much to ponder on the capacity of content moderation to either reflect or change the realities that define our fractured political landscape and the array of actors that operate in its spaces. This Special Issue intends to move the disciplinary conversation forward, sparking reflection and, we hope, further conversation on the theory, practice and ethics of content takedowns.

The first article by Colten Meisner (2023) highlights the vulnerability of social media creators in the face of mass reporting—a targeted, automated strategy used to trigger content takedowns and account bans. This form of harassment utilizes platform infrastructures for community governance, leaving creators with few avenues of support and access to platform assistance after orchestrated attacks. By conducting interviews with affected creators, this article seeks to understand how content reporting tools can be weaponized, exposing creators to a world of challenges, including barriers to self-expression. The findings are crucial to understanding the weaponization of content takedowns to “remove” voices from the public sphere.

The impact of algorithmic content moderation practices on marginalized groups, particularly activists, is the focus of the second article by Diane Jackson. While previous research has explored the limitations of automated content moderation, this article places it in the context of global social movements. It illustrates how marginalized groups experience online oppression similar to their offline marginalization and discusses the ethical and political implications at various levels – individual, organizing, and societal. This article calls for a systemic consideration of the effects of algorithmic content moderation, including takedown measures, on both online and offline activism.
Meiqing Zhang delves into the multilevel sources of contention in content removal policies on social media platforms. The author exposes the value conflicts inherent in content moderation, where competing democratic virtues collide. Philosophical debates wrestle with the institutionalization of speech censorship, while governance challenges arise in determining who should be responsible for content guidelines. Furthermore, operational issues surface with existing lexicon-based content deletion technologies, which are prone to errors. This article invites us to ponder the clash of democratic values and the confusion surrounding governance in a digital public sphere. It calls for a new social consensus and legitimate processes to establish a mode of online speech governance aligned with democratic principles.

The fourth article by Vivian Gerrand and James Fitzgerald situates conspiracy theories within the wellness and health industry. The article, grounded in political philosophy and inspired by Chantal Mouffe's work, delves into the impact of content takedowns on online community formation and the unintended consequences of takedown policies as a potential accelerant to this process. It focuses on the global wellness industry, using the case of Australian wellness chef Pete Evans to illustrate how content takedown policies can inadvertently foster extremist sentiments in counter-hegemonic discursive spaces.

While technology-driven content moderation gains prominence, extremist groups and conspiracy theorists have become adept at manipulating media content and technological affordances to evade regulation. This article by Ashley Mattheis and Ashton Kingdon unveils three primary manipulation tactics—numerology, borderlands, and merchandising—used by extremists online. It transcends ideological boundaries to focus on the tactics themselves, offering case examples from various extremist ideologies. The analysis underscores the importance of understanding how extremists use manipulation strategies to “game” content moderation. It calls for demystification processes to be incorporated into content moderation settings, expanding our understanding of sociotechnical remedial measures.

The sixth article by Sean Looney tackles the critical role of Content Delivery Networks (CDNs) in the internet ecosystem and their response to extremist and terrorist content hosted on their servers. Using the example of Cloudflare and Kiwi Farms, it highlights the lack of a standardized approach to content moderation across CDNs. The reluctance of Cloudflare's CEO to intervene underscores the ethical dilemma faced by CDNs, while the subsequent actions of CDNs like Diamwall raise questions about the industry's obligations. The article emphasizes the need for clear rules and obligations in the realm of CDN services, particularly in light of the EU's Digital Services Act 2022.

In examining the takedown policies of mainstream platforms, Marie Santini, Débora Salles, and Bruno Mattos explore YouTube's recommendation system, using Brazil as their case study. Their experiment in understanding the recommendation system sheds light on the systematic powers of platforms. Contrary to the stated aim of reining in extremist content, their findings demonstrate that YouTube systematically gave preference to content from Jovem Pan, Brazil's largest conservative media outlet (akin to Fox News in the United States), and did not remove its “toxic” content.
This article illustrates how the recommendation algorithm of a mainstream platform magnified an imbalance in the portrayal of political candidates, thereby exposing a stark regulatory asymmetry between traditional broadcast media and online platforms in Brazil. By using a non-Anglophone, Global South case study, the article powerfully demonstrates the intricate dynamics of online content recommendation, and its potential impact on shaping public opinion and discourse, in spheres of regulation that are less well known to international audiences.

Moving into the “traditional” sense of content takedowns, the eighth article by Amy Louise Watkin takes a broader perspective by considering their regulatory aspects. It highlights the criticism surrounding existing regulations for the removal of terrorist content from tech platforms, particularly concerning issues of free speech and employee well-being. Drawing inspiration from social regulation approaches in other industries, such as environmental protection, consumer protection, and occupational health and safety, this article advocates for a new regulatory approach that addresses both content moderation and the safety of content moderators themselves.

Delving into states as regulators, Richard McNeil-Wilson and Danielle Flonk focus on the conundrum in which the European Union finds itself regarding its quest to combat far-right extremism online. Pressure to address this issue has led to the development of a Europe-wide response. However, this response has rested on a delicate balance of policy agreements among member states, resulting in a potentially concerning feature of policymaking. The article highlights the challenges posed by the broadening and loosening of definitions surrounding far-right extremist content. By combining primary sources, including policy documents, with interviews with EU politicians and practitioners, the article examines the framing and securitization of extremist content regulation over time. It reveals how the securitizing lens of counter-extremism may unintentionally complicate the development of coherent and effective responses to the far right.

Turning to the effects of content takedowns on researchers, Aaron Zelin (2023) maps out the challenges and broader consequences of automated content takedowns for researchers. By surveying and interviewing researchers who work on extremist content and integrating their experiences and concerns, the article challenges the binary approach to content takedowns. More importantly, these data inform possible policy solutions for governments and platforms to maximize the efficiency of takedowns of extremist content while minimizing the consequences for researchers.

The final article by Reem Ahmed (2023) takes us to Germany, where the Netzwerkdurchsetzungsgesetz (NetzDG) has reshaped the landscape of state-regulated content takedowns. This pioneering act enforces offline legality online, presenting a blueprint for content moderation worldwide. However, concerns have arisen regarding the balance between freedom of expression and law enforcement in content moderation. Through examining NetzDG-related case law and disputed takedowns, the study identifies the main points of contention and underscores the role of judicial decisions in the broader regulatory discourse. It delves into the challenges of identifying illegal content online and the implications for content moderation practices, including content takedowns.