Moral Engagement and Disengagement in Health Care AI Development.

AJOB Empirical Bioethics (Q1, Arts and Humanities) | Pub Date: 2024-04-08 | DOI: 10.1080/23294515.2024.2336906
A. Nichol, Meghan Halley, Carole A. Federico, Mildred K. Cho, Pamela L. Sankar
{"title":"Moral Engagement and Disengagement in Health Care AI Development.","authors":"A. Nichol, Meghan Halley, Carole A Federico, Mildred K. Cho, Pamela L Sankar","doi":"10.1080/23294515.2024.2336906","DOIUrl":null,"url":null,"abstract":"BACKGROUND\nMachine learning (ML) is utilized increasingly in health care, and can pose harms to patients, clinicians, health systems, and the public. In response, regulators have proposed an approach that would shift more responsibility to ML developers for mitigating potential harms. To be effective, this approach requires ML developers to recognize, accept, and act on responsibility for mitigating harms. However, little is known regarding the perspectives of developers themselves regarding their obligations to mitigate harms.\n\n\nMETHODS\nWe conducted 40 semi-structured interviews with developers of ML predictive analytics applications for health care in the United States.\n\n\nRESULTS\nParticipants varied widely in their perspectives on personal responsibility and included examples of both moral engagement and disengagement, albeit in a variety of forms. While most (70%) of participants made a statement indicative of moral engagement, most of these statements reflected an awareness of moral issues, while only a subset of these included additional elements of engagement such as recognizing responsibility, alignment with personal values, addressing conflicts of interests, and opportunities for action. Further, we identified eight distinct categories of moral disengagement reflecting efforts to minimize potential harms or deflect personal responsibility for preventing or mitigating harms.\n\n\nCONCLUSIONS\nThese findings suggest possible facilitators and barriers to the development of ethical ML that could act by encouraging moral engagement or discouraging moral disengagement. Regulatory approaches that depend on the ability of ML developers to recognize, accept, and act on responsibility for mitigating harms might have limited success without education and guidance for ML developers about the extent of their responsibilities and how to implement them.","PeriodicalId":38118,"journal":{"name":"AJOB Empirical Bioethics","volume":"244 6","pages":"1-10"},"PeriodicalIF":0.0000,"publicationDate":"2024-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"AJOB Empirical Bioethics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1080/23294515.2024.2336906","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"Arts and Humanities","Score":null,"Total":0}
Citations: 0

Abstract

BACKGROUND: Machine learning (ML) is used increasingly in health care and can pose harms to patients, clinicians, health systems, and the public. In response, regulators have proposed an approach that would shift more responsibility to ML developers for mitigating potential harms. To be effective, this approach requires ML developers to recognize, accept, and act on responsibility for mitigating harms. However, little is known about the perspectives of developers themselves regarding their obligations to mitigate harms.

METHODS: We conducted 40 semi-structured interviews with developers of ML predictive analytics applications for health care in the United States.

RESULTS: Participants varied widely in their perspectives on personal responsibility and offered examples of both moral engagement and disengagement, albeit in a variety of forms. While most participants (70%) made a statement indicative of moral engagement, most of these statements reflected only an awareness of moral issues; a smaller subset included additional elements of engagement such as recognizing responsibility, alignment with personal values, addressing conflicts of interest, and opportunities for action. Further, we identified eight distinct categories of moral disengagement reflecting efforts to minimize potential harms or to deflect personal responsibility for preventing or mitigating them.

CONCLUSIONS: These findings suggest possible facilitators of and barriers to the development of ethical ML, which could act by encouraging moral engagement or discouraging moral disengagement. Regulatory approaches that depend on the ability of ML developers to recognize, accept, and act on responsibility for mitigating harms may have limited success without education and guidance for ML developers about the extent of their responsibilities and how to implement them.
Source journal: AJOB Empirical Bioethics (Arts and Humanities - Philosophy)
CiteScore: 3.90 · Self-citation rate: 0.00% · Articles per year: 21