If you worry about humanity, you should be more scared of humans than of AI

Bulletin of the Atomic Scientists · IF 1.9 · JCR Q2 (International Relations) · Pub Date: 2023-09-03 · DOI: 10.1080/00963402.2023.2245242
Moran Cerf, Adam Waytz
Citations: 0

Abstract

Advances in artificial intelligence (AI) have prompted extensive and public concerns about this technology’s capacity to contribute to the spread of misinformation, algorithmic bias, and cybersecurity breaches and to pose, potentially, existential threats to humanity. We suggest that although these threats are both real and important to address, the heightened attention to AI’s harms has distracted from human beings’ outsized role in perpetuating these same harms. We suggest the need to recalibrate standards for judging the dangers of AI in terms of their risks relative to those of human beings. Further, we suggest that, if anything, AI can aid human beings in decision making aimed at improving social equality, safety, productivity, and mitigating some existential threats.
Source journal: Bulletin of the Atomic Scientists · CiteScore 1.80 · Self-citation rate 0.00% · Articles published: 54