The TESCREAL bundle: Eugenics and the promise of utopia through artificial general intelligence

First Monday (Q2, Computer Science) | Pub Date: 2024-04-14 | DOI: 10.5210/fm.v29i4.13636
Timnit Gebru, Émile P. Torres
{"title":"TESCREAL 捆绑软件:优生学和通过人工通用智能实现乌托邦的承诺","authors":"Timnit Gebru, Émile P. Torres","doi":"10.5210/fm.v29i4.13636","DOIUrl":null,"url":null,"abstract":"The stated goal of many organizations in the field of artificial intelligence (AI) is to develop artificial general intelligence (AGI), an imagined system with more intelligence than anything we have ever seen. Without seriously questioning whether such a system can and should be built, researchers are working to create “safe AGI” that is “beneficial for all of humanity.” We argue that, unlike systems with specific applications which can be evaluated following standard engineering principles, undefined systems like “AGI” cannot be appropriately tested for safety. Why, then, is building AGI often framed as an unquestioned goal in the field of AI? In this paper, we argue that the normative framework that motivates much of this goal is rooted in the Anglo-American eugenics tradition of the twentieth century. As a result, many of the very same discriminatory attitudes that animated eugenicists in the past (e.g., racism, xenophobia, classism, ableism, and sexism) remain widespread within the movement to build AGI, resulting in systems that harm marginalized groups and centralize power, while using the language of “safety” and “benefiting humanity” to evade accountability. We conclude by urging researchers to work on defined tasks for which we can develop safety protocols, rather than attempting to build a presumably all-knowing system such as AGI.","PeriodicalId":38833,"journal":{"name":"First Monday","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"The TESCREAL bundle: Eugenics and the promise of utopia through artificial general intelligence\",\"authors\":\"Timnit Gebru, Émile P. Torres\",\"doi\":\"10.5210/fm.v29i4.13636\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The stated goal of many organizations in the field of artificial intelligence (AI) is to develop artificial general intelligence (AGI), an imagined system with more intelligence than anything we have ever seen. Without seriously questioning whether such a system can and should be built, researchers are working to create “safe AGI” that is “beneficial for all of humanity.” We argue that, unlike systems with specific applications which can be evaluated following standard engineering principles, undefined systems like “AGI” cannot be appropriately tested for safety. Why, then, is building AGI often framed as an unquestioned goal in the field of AI? In this paper, we argue that the normative framework that motivates much of this goal is rooted in the Anglo-American eugenics tradition of the twentieth century. As a result, many of the very same discriminatory attitudes that animated eugenicists in the past (e.g., racism, xenophobia, classism, ableism, and sexism) remain widespread within the movement to build AGI, resulting in systems that harm marginalized groups and centralize power, while using the language of “safety” and “benefiting humanity” to evade accountability. 
We conclude by urging researchers to work on defined tasks for which we can develop safety protocols, rather than attempting to build a presumably all-knowing system such as AGI.\",\"PeriodicalId\":38833,\"journal\":{\"name\":\"First Monday\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-04-14\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"First Monday\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.5210/fm.v29i4.13636\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"Computer Science\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"First Monday","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.5210/fm.v29i4.13636","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"Computer Science","Score":null,"Total":0}
Citations: 0

Abstract

The stated goal of many organizations in the field of artificial intelligence (AI) is to develop artificial general intelligence (AGI), an imagined system with more intelligence than anything we have ever seen. Without seriously questioning whether such a system can and should be built, researchers are working to create “safe AGI” that is “beneficial for all of humanity.” We argue that, unlike systems with specific applications which can be evaluated following standard engineering principles, undefined systems like “AGI” cannot be appropriately tested for safety. Why, then, is building AGI often framed as an unquestioned goal in the field of AI? In this paper, we argue that the normative framework that motivates much of this goal is rooted in the Anglo-American eugenics tradition of the twentieth century. As a result, many of the very same discriminatory attitudes that animated eugenicists in the past (e.g., racism, xenophobia, classism, ableism, and sexism) remain widespread within the movement to build AGI, resulting in systems that harm marginalized groups and centralize power, while using the language of “safety” and “benefiting humanity” to evade accountability. We conclude by urging researchers to work on defined tasks for which we can develop safety protocols, rather than attempting to build a presumably all-knowing system such as AGI.
Source journal: First Monday (Computer Science: Computer Networks and Communications)
CiteScore: 2.20
Self-citation rate: 0.00%
Articles published: 86
Journal description: First Monday is one of the first openly accessible, peer-reviewed journals on the Internet, solely devoted to the Internet. Since its start in May 1996, First Monday has published 1,035 papers in 164 issues; these papers were written by 1,316 different authors. In addition, eight special issues have appeared. The most recent special issue was entitled A Web site with a view — The Third World on First Monday and it was edited by Eduardo Villanueva Mansilla. First Monday is indexed in Communication Abstracts, Computer & Communications Security Abstracts, DoIS, eGranary Digital Library, INSPEC, Information Science & Technology Abstracts, LISA, PAIS, and other services.