
First Monday: Latest Publications

Undersea cables in Africa: The new frontiers of digital colonialism
Q2 Computer Science | Pub Date: 2024-04-14 | DOI: 10.5210/fm.v29i4.13637
Esther Mwema, Abeba Birhane
The Internet has become the backbone of the social fabric. The United Nations Human Rights Council declared access to the Internet a fundamental human right over a decade ago. Yet, Africa remains the region with the widest Digital Divide, where most of the population is either sparsely connected or has no access to the Internet. This has in turn created a race amongst Western big tech corporations scrambling to “bridge the Digital Divide”. Although the Internet is often portrayed as something that resides in the “cloud”, it heavily depends on physical infrastructure, including undersea cables. In this paper, we examine how current undersea cable projects and Internet infrastructure, owned, controlled, and managed by private Western big tech corporations, often using the “bridging the Digital Divide” rhetoric, not only replicate colonial logic but also follow the same infrastructural path laid during the trans-Atlantic slave trade era. Despite their significant impact on the continent’s digital infrastructure, we find publicly available information is scarce and undersea cable projects are carried out with no oversight and little transparency. We review the historical evolution of the Internet, detail and track the development of undersea cables in Africa, and illustrate their tight connection with colonial legacies. We provide an in-depth analysis of two current major undersea cable undertakings across the continent: Google’s Equiano and Meta’s 2Africa. Using Google and Meta’s undersea cables as case studies, we illustrate how these projects follow colonial logic, create a new cost model that keeps African nations under perpetual debt, and serve as infrastructure for mass data harvesting while bringing little benefit to the Global South. We conclude with actionable recommendations for and demands from big tech corporations, regulatory bodies, and governments across the African continent.
Citations: 0
Participation versus scale: Tensions in the practical demands on participatory AI
Q2 Computer Science | Pub Date: 2024-04-14 | DOI: 10.5210/fm.v29i4.13642
Margaret Young, Upol Ehsan, Ranjit Singh, Emnet Tafesse, Michele Gilman, Christina Harrington, Jacob Metcalf
Ongoing calls from academic and civil society groups and regulatory demands for the central role of affected communities in development, evaluation, and deployment of artificial intelligence systems have created the conditions for an incipient “participatory turn” in AI. This turn encompasses a wide range of approaches — from legal requirements for consultation with civil society groups and community input in impact assessments, to methods for inclusive data labeling and co-design. However, more work remains in adapting the methods of participation to the scale of commercial AI. In this paper, we highlight the tensions between the localized engagement of community-based participatory methods, and the globalized operation of commercial AI systems. Namely, the scales of commercial AI and participatory methods tend to differ along the fault lines of (1) centralized to distributed development; (2) calculable to self-identified publics; and (3) instrumental to intrinsic perceptions of the value of public input. However, a close look at these differences in scale demonstrates that these tensions are not irresolvable but contingent. We note that beyond its reference to the size of any given system, scale serves as a measure of the infrastructural investments needed to extend a system across contexts. To scale for a more participatory AI, we argue that these same tensions become opportunities for intervention by offering case studies that illustrate how infrastructural investments have supported participation in AI design and governance. Just as scaling commercial AI has required significant investments, we argue that scaling participation accordingly will require the creation of infrastructure dedicated to the practical dimension of achieving the participatory tradition’s commitment to shifting power.
Citations: 0
The TESCREAL bundle: Eugenics and the promise of utopia through artificial general intelligence
Q2 Computer Science | Pub Date: 2024-04-14 | DOI: 10.5210/fm.v29i4.13636
Timnit Gebru, Émile P. Torres
The stated goal of many organizations in the field of artificial intelligence (AI) is to develop artificial general intelligence (AGI), an imagined system with more intelligence than anything we have ever seen. Without seriously questioning whether such a system can and should be built, researchers are working to create “safe AGI” that is “beneficial for all of humanity.” We argue that, unlike systems with specific applications which can be evaluated following standard engineering principles, undefined systems like “AGI” cannot be appropriately tested for safety. Why, then, is building AGI often framed as an unquestioned goal in the field of AI? In this paper, we argue that the normative framework that motivates much of this goal is rooted in the Anglo-American eugenics tradition of the twentieth century. As a result, many of the very same discriminatory attitudes that animated eugenicists in the past (e.g., racism, xenophobia, classism, ableism, and sexism) remain widespread within the movement to build AGI, resulting in systems that harm marginalized groups and centralize power, while using the language of “safety” and “benefiting humanity” to evade accountability. We conclude by urging researchers to work on defined tasks for which we can develop safety protocols, rather than attempting to build a presumably all-knowing system such as AGI.
Citations: 0
Introduction for the special issue of “Ideologies of AI and the consolidation of power”: Naming power
Q2 Computer Science | Pub Date: 2024-04-14 | DOI: 10.5210/fm.v29i4.13643
Jenna Burrell, Jacob Metcalf
This introductory essay for the special issue of First Monday, “Ideologies of AI and the consolidation of power,” considers how power operates in AI and machine learning research and publication. Drawing on themes from the seven contributions to this special issue, we argue that what can and cannot be said inside of mainstream computer science publications appears to be constrained by the power, wealth, and ideology of a small cohort of industrialists. The result is that shaping discourse about the AI industry is itself a form of power that cannot be named inside of computer science. We argue that naming and grappling with this power, and the troubled history of core commitments behind the pursuit of general artificial intelligence, is necessary for the integrity of the field and the well-being of the people whose lives are impacted by AI.
Citations: 0
Automated decision-making as domination
Q2 Computer Science | Pub Date: 2024-04-14 | DOI: 10.5210/fm.v29i4.13630
Jenna Burrell
Machine learning ethics research is demonstrably skewed. Work that defines fairness as a matter of distribution or allocation and that proposes computationally tractable definitions of fairness has been overproduced and overpublished. This paper takes a sociological approach to explain how subtle processes of social reproduction within the field of computer science partially explain this outcome. Arguing that allocative fairness is inherently limited as a definition of justice, I point to how researchers in this area can make broader use of the intellectual insights from political philosophy, philosophy of knowledge, and feminist and critical race theories. I argue that a definition of injustice not as allocative unfairness but as domination, drawing primarily from the argument of philosopher Iris Marion Young, would better explain observations of algorithmic harm that are widely acknowledged in this research community. This alternate definition expands the solution space for algorithmic justice to include other forms of consequential action beyond code fixes, such as legislation, participatory assessments, forms of user repurposing and resistance, and activism that leads to bans on certain uses of technology.
Citations: 0
Field-building and the epistemic culture of AI safety
Q2 Computer Science | Pub Date: 2024-04-14 | DOI: 10.5210/fm.v29i4.13626
Shazeda Ahmed, Klaudia Jaźwińska, Archana Ahlawat, Amy Winecoff, Mona Wang
The emerging field of “AI safety” has attracted public attention and large infusions of capital to support its implied promise: the ability to deploy advanced artificial intelligence (AI) while reducing its gravest risks. Ideas from effective altruism, longtermism, and the study of existential risk are foundational to this new field. In this paper, we contend that overlapping communities interested in these ideas have merged into what we refer to as the broader “AI safety epistemic community,” which is sustained through its mutually reinforcing community-building and knowledge production practices. We support this assertion through an analysis of four core sites in this community’s epistemic culture: 1) online community-building through Web forums and career advising; 2) AI forecasting; 3) AI safety research; and 4) prize competitions. The dispersal of this epistemic community’s members throughout the tech industry, academia, and policy organizations ensures their continued input into global discourse about AI. Understanding the epistemic culture that fuses their moral convictions and knowledge claims is crucial to evaluating these claims, which are gaining influence in critical, rapidly changing debates about the harms of AI and how to mitigate them.
Citations: 0
Debunking robot rights metaphysically, ethically, and legally
Q2 Computer Science | Pub Date: 2024-04-14 | DOI: 10.5210/fm.v29i4.13628
Abeba Birhane, J. V. Dijk, Frank Pasquale
In this work we challenge the argument for robot rights on metaphysical, ethical and legal grounds. Metaphysically, we argue that machines are not the kinds of things that may be denied or granted rights. Building on theories of phenomenology and post-Cartesian approaches to cognitive science, we ground our position in the lived reality of actual humans in an increasingly ubiquitously connected, controlled, digitized, and surveilled society. Ethically, we argue that, given machines’ current and potential harms to the most marginalized in society, limits on (rather than rights for) machines should be at the centre of current AI ethics debate. From a legal perspective, the best analogy to robot rights is not human rights but corporate rights, a highly controversial concept whose most important effect has been the undermining of worker, consumer, and voter rights by advancing the power of capital to exercise outsized influence on politics and law. The idea of robot rights, we conclude, acts as a smoke screen, allowing theorists and futurists to fantasize about benevolently sentient machines with unalterable needs and desires protected by law. While such fantasies have motivated fascinating fiction and art, once they influence legal theory and practice articulating the scope of rights claims, they threaten to immunize from legal accountability the current AI and robotics that is fuelling surveillance capitalism, accelerating environmental destruction, and entrenching injustice and human suffering.
Citations: 0
Opaque algorithms, transparent biases: Automated content moderation during the Sheikh Jarrah Crisis
Q2 Computer Science | Pub Date: 2024-04-14 | DOI: 10.5210/fm.v29i4.13620
Norah Abokhodair, Yarden Skop, Sarah Rüller, Konstantin Aal, Houda Elmimouni
Social media platforms, while influential tools for human rights activism, free speech, and mobilization, also bear the influence of corporate ownership and commercial interests. This dual character can lead to clashing interests in the operations of these platforms. This study centers on the May 2021 Sheikh Jarrah events in East Jerusalem, a focal point in the Israeli-Palestinian conflict that garnered global attention. During this period, Palestinian activists and their allies observed and encountered a notable increase in automated content moderation actions, like shadow banning and content removal. We surveyed 201 users who faced content moderation and conducted 12 interviews with political influencers to assess the impact of these practices on activism. Our analysis centers on automated content moderation and transparency, investigating how users and activists perceive the content moderation systems employed by social media platforms, and their opacity. Findings reveal censorship perceived by pro-Palestinian activists, driven by opaque and obfuscated technological mechanisms of content demotion that make harm difficult to substantiate and leave users without redress mechanisms. We view this difficulty as part of algorithmic harms, in the realm of automated content moderation. This dynamic has far-reaching implications for activism’s future and it raises questions about power centralization in digital spaces.
Citations: 0
Societal implications of quantum technologies through a technocriticism of quantum key distribution
Q2 Computer Science Pub Date: 2024-03-09 DOI: 10.5210/fm.v29i3.13571
Sarah Young, Catherine Brooks, J. Pridmore
Quantum networking is advancing rapidly, with some arguing that a working quantum network may be realized by 2030. Just how these networks can and will come to be is still a work in progress, including how communications within them will be secured. While debates about the development of quantum networking often focus on technical specifications, less is written about their social impacts and the myriad ways individuals can engage in conversations about quantum technologies, especially in non-technical ways. Spaces do exist for legal, humanist, or behavioral scholars to weigh in on the impacts of this emerging capability, and using the example of criticism of the quantum key distribution (QKD) protocol, this paper illustrates five entry points for non-technical experts to help technical, practical, and scholarly communities prepare for the anticipated quantum revolution. QKD was selected as the area of critique due to its established position as an application of quantum properties that reaches beyond theoretical settings.
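The abstract critiques QKD as a social and political object without explaining its mechanics. For orientation, the sketch below simulates only the classical sifting step of BB84, the canonical QKD protocol: Alice encodes random bits in random bases, Bob measures in random bases, and both keep only the positions where their bases matched. The function name and parameters are illustrative, and the model ignores noise, eavesdropping, and real quantum hardware entirely.

```python
import random

def bb84_sift(n_bits=1000, seed=0):
    """Toy BB84 sifting sketch (no quantum hardware modeled).

    Alice sends random bits in random bases (X or Z); Bob measures
    in random bases; they publicly compare bases and keep only the
    positions where the bases matched.
    """
    rng = random.Random(seed)
    alice_bits  = [rng.randint(0, 1) for _ in range(n_bits)]
    alice_bases = [rng.choice("XZ") for _ in range(n_bits)]
    bob_bases   = [rng.choice("XZ") for _ in range(n_bits)]
    # Without an eavesdropper, a matching basis yields Alice's bit;
    # a mismatched basis yields a uniformly random outcome.
    bob_bits = [a if ab == bb else rng.randint(0, 1)
                for a, ab, bb in zip(alice_bits, alice_bases, bob_bases)]
    sifted = [(a, b)
              for a, ab, bb, b in zip(alice_bits, alice_bases, bob_bases, bob_bits)
              if ab == bb]
    return sifted

sifted = bb84_sift()
# In this idealized model the sifted keys agree on every kept position,
# and roughly half of the raw bits survive sifting.
assert all(a == b for a, b in sifted)
print(f"sifted key length: {len(sifted)} of 1000 raw bits")
```

In a real deployment, eavesdropping would introduce disagreements in the sifted key, which the parties detect by sacrificing and comparing a random sample of bits; that detection guarantee is the property the paper's technocriticism takes as its starting point.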
Citations: 0
Instagram as a narrative platform
Q2 Computer Science Pub Date: 2024-03-09 DOI: 10.5210/fm.v29i3.12497
Mariya Kozharinova, Lev Manovich
Even though Instagram has been the subject of numerous studies, none of them have systematically investigated its potential as a narrative medium. This article argues that Instagram’s narrative capabilities are comparable to those of literature and film. To support our claims, we analyze a number of prominent female Instagram creators and demonstrate how they employ the platform’s diverse features, functionalities, and interface to create multi-year biographical narratives. Furthermore, we discuss the applicability of theories developed in literary and film studies in analyzing Instagram’s narrative capabilities. By employing Bakhtin’s influential chronotope concept, we examine in depth how these narratives make specific use of space and time. Additionally, we compare time construction in film and Instagram narratives using the cinema studies’ theory of narrative time in movies.
Citations: 0