Rules for robots, and why medical AI breaks them.

IF 2.5 · CAS Tier 2 (Philosophy) · JCR Q1 (Ethics) · Journal of Law and the Biosciences · Pub Date: 2023-01-01 · DOI: 10.1093/jlb/lsad001
Barbara J Evans
Citations: 1

Abstract


This article critiques the quest to state general rules to protect human rights against AI/ML computational tools. The White House Blueprint for an AI Bill of Rights was a recent attempt that fails in ways this article explores. There are limits to how far ethicolegal analysis can go in abstracting AI/ML tools, as a category, from the specific contexts where AI tools are deployed. Health technology offers a good example of this principle. The salient dilemma with AI/ML medical software is that privacy policy has the potential to undermine distributional justice, forcing a choice between two competing visions of privacy protection. The first, stressing individual consent, won favor among bioethicists, information privacy theorists, and policymakers after 1970 but displays an ominous potential to bias AI training data in ways that promote health care inequities. The alternative, an older duty-based approach from medical privacy law aligns with a broader critique of how late-20th-century American law and ethics endorsed atomistic autonomy as the highest moral good, neglecting principles of caring, social interdependency, justice, and equity. Disregarding the context of such choices can produce suboptimal policies when - as in medicine and many other contexts - the use of personal data has high social value.

Source journal
Journal of Law and the Biosciences
Category: Medicine (miscellaneous)
CiteScore: 7.40
Self-citation rate: 5.90%
Articles per year: 35
Review time: 13 weeks
Journal description: The Journal of Law and the Biosciences (JLB) is the first fully Open Access peer-reviewed legal journal focused on advances at the intersection of law and the biosciences. A co-venture between Duke University, Harvard University Law School, and Stanford University, and published by Oxford University Press, this open access, online, and interdisciplinary academic journal publishes cutting-edge scholarship in this important new field. The Journal contains original and response articles, essays, and commentaries on a wide range of topics, including bioethics, neuroethics, genetics, reproductive technologies, stem cells, enhancement, patent law, and food and drug regulation. JLB is published as one volume with three issues per year, with new articles posted online on an ongoing basis.
Latest articles in this journal:
- The new EU-US data protection framework's implications for healthcare.
- The new regulation of non-medical neurotechnologies in the European Union: overview and reflection.
- Implementing the human right to science in the context of health: introduction to the special issue.
- Biosimilar approval pathways: comparing the roles of five medicines regulators.
- Industry price guarantees for publicly funded medicines: learning from Project NextGen for pandemics and beyond.