Experts or Authorities? The Strange Case of the Presumed Epistemic Superiority of Artificial Intelligence Systems

Minds and Machines | IF 4.2 | CAS Tier 3 (Computer Science) | JCR Q2, Computer Science, Artificial Intelligence | Publication date: 2024-07-06 | DOI: 10.1007/s11023-024-09681-1
Andrea Ferrario, Alessandro Facchini, Alberto Termine
{"title":"专家还是权威?假定人工智能系统具有认识论优越性的奇特案例","authors":"Andrea Ferrario, Alessandro Facchini, Alberto Termine","doi":"10.1007/s11023-024-09681-1","DOIUrl":null,"url":null,"abstract":"<p>The high predictive accuracy of contemporary machine learning-based AI systems has led some scholars to argue that, in certain cases, we should grant them epistemic expertise and authority over humans. This approach suggests that humans would have the epistemic obligation of relying on the predictions of a highly accurate AI system. Contrary to this view, in this work we claim that it is not possible to endow AI systems with a genuine account of epistemic expertise. In fact, relying on accounts of expertise and authority from virtue epistemology, we show that epistemic expertise requires a relation with understanding that AI systems do not satisfy and intellectual abilities that these systems do not manifest. Further, following the Distribution Cognition theory and adapting an account by Croce on the virtues of collective epistemic agents to the case of human-AI interactions we show that, if an AI system is successfully appropriated by a human agent, a <i>hybrid</i> epistemic agent emerges, which can become both an epistemic expert and an authority. Consequently, we claim that the aforementioned hybrid agent is the appropriate object of a discourse around trust in AI and the epistemic obligations that stem from its epistemic superiority.</p>","PeriodicalId":51133,"journal":{"name":"Minds and Machines","volume":"14 1","pages":""},"PeriodicalIF":4.2000,"publicationDate":"2024-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Experts or Authorities? The Strange Case of the Presumed Epistemic Superiority of Artificial Intelligence Systems\",\"authors\":\"Andrea Ferrario, Alessandro Facchini, Alberto Termine\",\"doi\":\"10.1007/s11023-024-09681-1\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>The high predictive accuracy of contemporary machine learning-based AI systems has led some scholars to argue that, in certain cases, we should grant them epistemic expertise and authority over humans. This approach suggests that humans would have the epistemic obligation of relying on the predictions of a highly accurate AI system. Contrary to this view, in this work we claim that it is not possible to endow AI systems with a genuine account of epistemic expertise. In fact, relying on accounts of expertise and authority from virtue epistemology, we show that epistemic expertise requires a relation with understanding that AI systems do not satisfy and intellectual abilities that these systems do not manifest. Further, following the Distribution Cognition theory and adapting an account by Croce on the virtues of collective epistemic agents to the case of human-AI interactions we show that, if an AI system is successfully appropriated by a human agent, a <i>hybrid</i> epistemic agent emerges, which can become both an epistemic expert and an authority. 
Consequently, we claim that the aforementioned hybrid agent is the appropriate object of a discourse around trust in AI and the epistemic obligations that stem from its epistemic superiority.</p>\",\"PeriodicalId\":51133,\"journal\":{\"name\":\"Minds and Machines\",\"volume\":\"14 1\",\"pages\":\"\"},\"PeriodicalIF\":4.2000,\"publicationDate\":\"2024-07-06\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Minds and Machines\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1007/s11023-024-09681-1\",\"RegionNum\":3,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Minds and Machines","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1007/s11023-024-09681-1","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

The high predictive accuracy of contemporary machine learning-based AI systems has led some scholars to argue that, in certain cases, we should grant them epistemic expertise and authority over humans. This approach suggests that humans would have the epistemic obligation of relying on the predictions of a highly accurate AI system. Contrary to this view, in this work we claim that it is not possible to endow AI systems with a genuine account of epistemic expertise. In fact, relying on accounts of expertise and authority from virtue epistemology, we show that epistemic expertise requires a relation with understanding that AI systems do not satisfy, as well as intellectual abilities that these systems do not manifest. Further, following Distributed Cognition theory and adapting an account by Croce on the virtues of collective epistemic agents to the case of human-AI interactions, we show that, if an AI system is successfully appropriated by a human agent, a hybrid epistemic agent emerges, which can become both an epistemic expert and an authority. Consequently, we claim that this hybrid agent is the appropriate object of a discourse around trust in AI and of the epistemic obligations that stem from its epistemic superiority.

Source journal
Minds and Machines (Engineering & Technology – Computer Science: Artificial Intelligence)
CiteScore: 12.60
Self-citation rate: 2.70%
Publication volume: 30
Review time: >12 weeks
Journal description: Minds and Machines, affiliated with the Society for Machines and Mentality, serves as a platform for fostering critical dialogue between the AI and philosophical communities. With a focus on problems of shared interest, the journal actively encourages discussions on the philosophical aspects of computer science. Offering a global forum, Minds and Machines provides a space to debate and explore important and contentious issues within its editorial focus. The journal presents special editions dedicated to specific topics, invites critical responses to previously published works, and features review essays addressing current problem scenarios. By facilitating a diverse range of perspectives, Minds and Machines encourages a reevaluation of the status quo and the development of new insights. Through this collaborative approach, the journal aims to bridge the gap between AI and philosophy, fostering a tradition of critique and ensuring these fields remain connected and relevant.