How to Make “AI” Intelligent; or, The Question of Epistemic Equality

Christopher Newfield
{"title":"How to Make “AI” Intelligent; or, The Question of Epistemic Equality","authors":"Christopher Newfield","doi":"10.1215/2834703x-10734076","DOIUrl":null,"url":null,"abstract":"Abstract Critics have identified a set of operational flaws in the machine language and deep learning systems now discussed under the “AI” banner. Five of the most discussed are social biases, particularly racism; opacity, such that users cannot assess how results were generated; coercion, in that architectures, datasets, algorithms, and the like are controlled by designers and platforms rather than users; systemic privacy violations; and the absence of academic freedom covering corporation-based research, such that results can be hyped in accordance with business objectives or suppressed and distorted if not. This article focuses on a sixth problem with AI, which is that the term intelligence misstates the actual status and effects of the technologies in question. To help fill the gap in rigorous uses of “intelligence” in public discussion, it analyzes Brian Cantwell Smith's The Promise of Artificial Intelligence (2019), noting humanities disciplines routinely operate with Smith's demanding notion of “genuine intelligence.” To get this notion into circulation among technologists, the article calls for replacement of the Two Cultures hierarchy codified by C. P. 
Snow in the 1950s with a system in which humanities scholars participate from the start in the construction and evaluation of “AI” research programs on a basis of epistemic equality between qualitative and quantitative disciplines.","PeriodicalId":500906,"journal":{"name":"Critical AI","volume":"68 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Critical AI","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1215/2834703x-10734076","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Critics have identified a set of operational flaws in the machine learning and deep learning systems now discussed under the “AI” banner. Five of the most discussed are social biases, particularly racism; opacity, such that users cannot assess how results were generated; coercion, in that architectures, datasets, algorithms, and the like are controlled by designers and platforms rather than users; systemic privacy violations; and the absence of academic freedom covering corporation-based research, such that results can be hyped in accordance with business objectives or suppressed and distorted if not. This article focuses on a sixth problem with AI, which is that the term intelligence misstates the actual status and effects of the technologies in question. To help fill the gap in rigorous uses of “intelligence” in public discussion, it analyzes Brian Cantwell Smith's The Promise of Artificial Intelligence (2019), noting that humanities disciplines routinely operate with Smith's demanding notion of “genuine intelligence.” To get this notion into circulation among technologists, the article calls for replacement of the Two Cultures hierarchy codified by C. P. Snow in the 1950s with a system in which humanities scholars participate from the start in the construction and evaluation of “AI” research programs on a basis of epistemic equality between qualitative and quantitative disciplines.