AI Moral Enhancement: Upgrading the Socio-Technical System of Moral Engagement.

IF 2.7 | JCR Region 2 (Philosophy) | Q1 Engineering, Multidisciplinary | Science and Engineering Ethics | Pub Date: 2023-03-23 | DOI: 10.1007/s11948-023-00428-2
Richard Volkman, Katleen Gabriels
Open Access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10036265/pdf/
Citations: 3

Abstract

AI Moral Enhancement: Upgrading the Socio-Technical System of Moral Engagement.

Several proposals for moral enhancement would use AI to augment (auxiliary enhancement) or even supplant (exhaustive enhancement) human moral reasoning or judgment. Exhaustive enhancement proposals conceive AI as some self-contained oracle whose superiority to our own moral abilities is manifest in its ability to reliably deliver the 'right' answers to all our moral problems. We think this is a mistaken way to frame the project, as it presumes that we already know many things that we are still in the process of working out, and reflecting on this fact reveals challenges even for auxiliary proposals that eschew the oracular approach. We argue there is nonetheless a substantial role that 'AI mentors' could play in our moral education and training. Expanding on the idea of an AI Socratic Interlocutor, we propose a modular system of multiple AI interlocutors with their own distinct points of view reflecting their training in a diversity of concrete wisdom traditions. This approach minimizes any risk of moral disengagement, while the existence of multiple modules from a diversity of traditions ensures pluralism is preserved. We conclude with reflections on how all this relates to the broader notion of moral transcendence implicated in the project of AI moral enhancement, contending it is precisely the whole concrete socio-technical system of moral engagement that we need to model if we are to pursue moral enhancement.

Source Journal: Science and Engineering Ethics (multidisciplinary; Engineering: General)
CiteScore: 10.70
Self-citation rate: 5.40%
Annual articles: 54
Review time: >12 weeks
Journal Introduction: Science and Engineering Ethics is an international multidisciplinary journal dedicated to exploring ethical issues associated with science and engineering, covering professional education, research and practice as well as the effects of technological innovations and research findings on society. While the focus of this journal is on science and engineering, contributions from a broad range of disciplines, including social sciences and humanities, are welcomed. Areas of interest include, but are not limited to, ethics of new and emerging technologies, research ethics, computer ethics, energy ethics, animals and human subjects ethics, ethics education in science and engineering, ethics in design, biomedical ethics, values in technology and innovation. We welcome contributions that deal with these issues from an international perspective, particularly from countries that are underrepresented in these discussions.
Latest articles in this journal:
Awareness of Jordanian Researchers About Predatory Journals: A Need for Training.
Empathy's Role in Engineering Ethics: Empathizing with One's Self to Others Across the Globe.
"Business as usual"? Safe-by-Design Vis-à-Vis Proclaimed Safety Cultures in Technology Development for the Bioeconomy.
Justifying Our Credences in the Trustworthiness of AI Systems: A Reliabilistic Approach.
Know Thyself, Improve Thyself: Personalized LLMs for Self-Knowledge and Moral Enhancement.