The harms of terminology: why we should reject so-called “frontier AI”

Gina Helfrich
AI and Ethics, vol. 4, no. 3, pp. 699–705. Published 2024-03-11. DOI: 10.1007/s43681-024-00438-1
Full text: https://link.springer.com/article/10.1007/s43681-024-00438-1

Abstract

In mid-2023, promoters of artificial intelligence (AI) as an “existential risk” coined a new term, “frontier AI,” which refers to “highly capable foundation models that could possess dangerous capabilities sufficient to pose severe risks to public safety.” Promoters of this new term were able to disseminate it via the United Kingdom (UK) government’s Frontier AI Taskforce (formerly the Foundation Models Taskforce) as well as the UK’s AI Safety Summit, held in November 2023.

I argue that adoption of the term “frontier AI” is harmful and contributes to AI hype. Promoting this new term is a way for its boosters to focus the public conversation on the AI-related risks they think are most important, namely “existential risk”—a scenario in which AI is able to bring about the destruction of humanity. Simultaneously, “frontier AI” is a re-branding exercise for the large-scale generative machine learning (ML) models that have been shown to cause severe and pervasive harms (including psychological, social, and environmental harms). Unlike “existential risk,” these harms are actual rather than theoretical, yet the term “frontier AI” shifts our collective focus away from actual harms toward hypothetical doomsday scenarios.

Moreover, “frontier AI” as a term invokes the colonial mindset, further reinscribing the harmful dynamics between the handful of powerful Western companies who produce today’s generative AI models and the people of the “Global South” who are most likely to experience harm as a direct result of the development and deployment of these AI technologies.
