A. Nichol, Meghan Halley, Carole A. Federico, Mildred K. Cho, Pamela L. Sankar
{"title":"医疗人工智能发展中的道德参与和脱离。","authors":"A. Nichol, Meghan Halley, Carole A Federico, Mildred K. Cho, Pamela L Sankar","doi":"10.1080/23294515.2024.2336906","DOIUrl":null,"url":null,"abstract":"BACKGROUND\nMachine learning (ML) is utilized increasingly in health care, and can pose harms to patients, clinicians, health systems, and the public. In response, regulators have proposed an approach that would shift more responsibility to ML developers for mitigating potential harms. To be effective, this approach requires ML developers to recognize, accept, and act on responsibility for mitigating harms. However, little is known regarding the perspectives of developers themselves regarding their obligations to mitigate harms.\n\n\nMETHODS\nWe conducted 40 semi-structured interviews with developers of ML predictive analytics applications for health care in the United States.\n\n\nRESULTS\nParticipants varied widely in their perspectives on personal responsibility and included examples of both moral engagement and disengagement, albeit in a variety of forms. While most (70%) of participants made a statement indicative of moral engagement, most of these statements reflected an awareness of moral issues, while only a subset of these included additional elements of engagement such as recognizing responsibility, alignment with personal values, addressing conflicts of interests, and opportunities for action. Further, we identified eight distinct categories of moral disengagement reflecting efforts to minimize potential harms or deflect personal responsibility for preventing or mitigating harms.\n\n\nCONCLUSIONS\nThese findings suggest possible facilitators and barriers to the development of ethical ML that could act by encouraging moral engagement or discouraging moral disengagement. 
Regulatory approaches that depend on the ability of ML developers to recognize, accept, and act on responsibility for mitigating harms might have limited success without education and guidance for ML developers about the extent of their responsibilities and how to implement them.","PeriodicalId":38118,"journal":{"name":"AJOB Empirical Bioethics","volume":"244 6","pages":"1-10"},"PeriodicalIF":0.0000,"publicationDate":"2024-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Moral Engagement and Disengagement in Health Care AI Development.\",\"authors\":\"A. Nichol, Meghan Halley, Carole A Federico, Mildred K. Cho, Pamela L Sankar\",\"doi\":\"10.1080/23294515.2024.2336906\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"BACKGROUND\\nMachine learning (ML) is utilized increasingly in health care, and can pose harms to patients, clinicians, health systems, and the public. In response, regulators have proposed an approach that would shift more responsibility to ML developers for mitigating potential harms. To be effective, this approach requires ML developers to recognize, accept, and act on responsibility for mitigating harms. However, little is known regarding the perspectives of developers themselves regarding their obligations to mitigate harms.\\n\\n\\nMETHODS\\nWe conducted 40 semi-structured interviews with developers of ML predictive analytics applications for health care in the United States.\\n\\n\\nRESULTS\\nParticipants varied widely in their perspectives on personal responsibility and included examples of both moral engagement and disengagement, albeit in a variety of forms. 
While most (70%) of participants made a statement indicative of moral engagement, most of these statements reflected an awareness of moral issues, while only a subset of these included additional elements of engagement such as recognizing responsibility, alignment with personal values, addressing conflicts of interests, and opportunities for action. Further, we identified eight distinct categories of moral disengagement reflecting efforts to minimize potential harms or deflect personal responsibility for preventing or mitigating harms.\\n\\n\\nCONCLUSIONS\\nThese findings suggest possible facilitators and barriers to the development of ethical ML that could act by encouraging moral engagement or discouraging moral disengagement. Regulatory approaches that depend on the ability of ML developers to recognize, accept, and act on responsibility for mitigating harms might have limited success without education and guidance for ML developers about the extent of their responsibilities and how to implement them.\",\"PeriodicalId\":38118,\"journal\":{\"name\":\"AJOB Empirical Bioethics\",\"volume\":\"244 6\",\"pages\":\"1-10\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-04-08\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"AJOB Empirical Bioethics\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1080/23294515.2024.2336906\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"Arts and Humanities\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"AJOB Empirical 
Bioethics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1080/23294515.2024.2336906","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"Arts and Humanities","Score":null,"Total":0}
Moral Engagement and Disengagement in Health Care AI Development.
BACKGROUND
Machine learning (ML) is increasingly used in health care and can pose harms to patients, clinicians, health systems, and the public. In response, regulators have proposed an approach that would shift more responsibility to ML developers for mitigating potential harms. To be effective, this approach requires ML developers to recognize, accept, and act on responsibility for mitigating harms. However, little is known about developers' own perspectives on their obligations to mitigate harms.
METHODS
We conducted 40 semi-structured interviews with developers of ML predictive analytics applications for health care in the United States.
RESULTS
Participants varied widely in their perspectives on personal responsibility and offered examples of both moral engagement and disengagement, albeit in a variety of forms. While most participants (70%) made statements indicative of moral engagement, most of these statements reflected only an awareness of moral issues; a smaller subset included additional elements of engagement, such as recognizing responsibility, aligning with personal values, addressing conflicts of interest, and identifying opportunities for action. Further, we identified eight distinct categories of moral disengagement, reflecting efforts to minimize potential harms or to deflect personal responsibility for preventing or mitigating them.
CONCLUSIONS
These findings suggest possible facilitators of, and barriers to, the development of ethical ML, which could act by encouraging moral engagement or discouraging moral disengagement. Regulatory approaches that depend on ML developers' ability to recognize, accept, and act on responsibility for mitigating harms may have limited success without education and guidance for developers about the extent of their responsibilities and how to fulfill them.