Addressing trade-offs in co-designing principles for ethical AI: perspectives from an industry-academia collaboration

Amelia Katirai, Yusuke Nagato
Journal: AI and Ethics, Vol. 5, No. 6, pp. 5611–5619
DOI: 10.1007/s43681-024-00477-8
Published: 2024-05-02 (Journal Article)
Full text: https://link.springer.com/article/10.1007/s43681-024-00477-8
PDF: https://link.springer.com/content/pdf/10.1007/s43681-024-00477-8.pdf
Citations: 0

Abstract

The development and deployment of artificial intelligence (AI) have rapidly outpaced regulation. As a result, many organizations opt to develop their own principles for the ethical development of AI, though little research has examined the processes through which these principles are developed. Prior research indicates that such processes involve perceived trade-offs between competing considerations, primarily between ethical concerns and organizational benefits or technological development. In this paper, we report on a novel, collaborative initiative in Japan between researchers in the humanities and social sciences and industry actors to co-design organizational AI ethics principles. We analyzed the minutes of 20 meetings from the formative phase of the development of these principles, using an inductive process drawing on thematic analysis, to identify the issues of importance to participants. Through this, we identified four core trade-offs faced by participants. We find that, contrary to prior literature, participants were concerned not only with trade-offs between ethical concerns and organizational benefits or technological development, but also with trade-offs between competing, ethically oriented considerations. We use the results of this study to highlight a need for further research to understand the longer-term impact of organization-led approaches to AI ethics on organizations and on society.
