Holding AI to Account: Challenges for the Delivery of Trustworthy AI in Healthcare

ACM Transactions on Computer-Human Interaction · IF 4.8 · JCR Q1 (Computer Science, Cybernetics) · CAS Tier 2 (Computer Science) · Pub Date: 2022-11-29 · DOI: 10.1145/3577009
R. Procter, P. Tolmie, M. Rouncefield
Citations: 6

Abstract

The need for AI systems to provide explanations for their behaviour is now widely recognised as key to their adoption. In this article, we examine the problem of trustworthy AI and explore what delivering this means in practice, with a focus on healthcare applications. Work in this area typically treats trustworthy AI as a problem of Human–Computer Interaction involving the individual user and an AI system. However, we argue here that this overlooks the important part played by organisational accountability in how people reason about and trust AI in socio-technical settings. To illustrate the importance of organisational accountability, we present findings from ethnographic studies of breast cancer screening and cancer treatment planning in multidisciplinary team meetings to show how participants made themselves accountable both to each other and to the organisations of which they are members. We use these findings to enrich existing understandings of the requirements for trustworthy AI and to outline some candidate solutions to the problems of making AI accountable both to individual users and organisationally. We conclude by outlining the implications of this for future work on the development of trustworthy AI, including ways in which our proposed solutions may be re-used in different application settings.
Source Journal: ACM Transactions on Computer-Human Interaction (Engineering & Technology; Computer Science, Cybernetics)
CiteScore: 8.50
Self-citation rate: 5.40%
Annual publications: 94
Review time: >12 weeks
Journal description: This ACM Transaction seeks to be the premier archival journal in the multidisciplinary field of human-computer interaction. Since its first issue in March 1994, it has presented work of the highest scientific quality that contributes to practice in the present and the future. The primary emphasis is on results of broad application, but the journal considers original work focused on specific domains, on special requirements, and on ethical issues -- the full range of design, development, and use of interactive systems.
Latest articles from this journal:
Unmaking Electronic Waste
Household Wattch: Exploring Opportunities for Surveillance and Consent through Families' Household Energy Use Data
Self-Determination Theory and HCI Games Research: Unfulfilled Promises and Unquestioned Paradigms
Carefully Unmaking the "Marginalized User": A Diffractive Analysis of a Gay Online Community
Gazing Heads: Investigating Gaze Perception in Video-Mediated Communication