Unexpected Inequality: Disparate-Impact From Artificial Intelligence in Healthcare Decisions.

Journal of Law and Health, Pub Date: 2021-01-01
Sahar Takshi
{"title":"意想不到的不平等:人工智能在医疗保健决策中的差异影响。","authors":"Sahar Takshi","doi":"","DOIUrl":null,"url":null,"abstract":"<p><p>Systemic discrimination in healthcare plagues marginalized groups. Physicians incorrectly view people of color as having high pain tolerance, leading to undertreatment. Women with disabilities are often undiagnosed because their symptoms are dismissed. Low-income patients have less access to appropriate treatment. These patterns, and others, reflect long-standing disparities that have become engrained in U.S. health systems. As the healthcare industry adopts artificial intelligence and algorithminformed (AI) tools, it is vital that regulators address healthcare discrimination. AI tools are increasingly used to make both clinical and administrative decisions by hospitals, physicians, and insurers--yet there is no framework that specifically places nondiscrimination obligations on AI users. The Food and Drug Administration has limited authority to regulate AI and has not sought to incorporate anti-discrimination principles in its guidance. Section 1557 of the Affordable Care Act has not been used to enforce nondiscrimination in healthcare AI and is under-utilized by the Office of Civil Rights. State level protections by medical licensing boards or malpractice liability are similarly untested and have not yet extended nondiscrimination obligations to AI. This Article discusses the role of each legal obligation on healthcare AI and the ways in which each system can improve to address discrimination. It highlights the ways in which industries can self-regulate to set nondiscrimination standards and concludes by recommending standards and creating a super-regulator to address disparate impact by AI. As the world moves towards automation, it is imperative that ongoing concerns about systemic discrimination are removed to prevent further marginalization in healthcare.</p>","PeriodicalId":73804,"journal":{"name":"Journal of law and health","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Unexpected Inequality: Disparate-Impact From Artificial Intelligence in Healthcare Decisions.\",\"authors\":\"Sahar Takshi\",\"doi\":\"\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Systemic discrimination in healthcare plagues marginalized groups. Physicians incorrectly view people of color as having high pain tolerance, leading to undertreatment. Women with disabilities are often undiagnosed because their symptoms are dismissed. Low-income patients have less access to appropriate treatment. These patterns, and others, reflect long-standing disparities that have become engrained in U.S. health systems. As the healthcare industry adopts artificial intelligence and algorithminformed (AI) tools, it is vital that regulators address healthcare discrimination. AI tools are increasingly used to make both clinical and administrative decisions by hospitals, physicians, and insurers--yet there is no framework that specifically places nondiscrimination obligations on AI users. The Food and Drug Administration has limited authority to regulate AI and has not sought to incorporate anti-discrimination principles in its guidance. Section 1557 of the Affordable Care Act has not been used to enforce nondiscrimination in healthcare AI and is under-utilized by the Office of Civil Rights. 
State level protections by medical licensing boards or malpractice liability are similarly untested and have not yet extended nondiscrimination obligations to AI. This Article discusses the role of each legal obligation on healthcare AI and the ways in which each system can improve to address discrimination. It highlights the ways in which industries can self-regulate to set nondiscrimination standards and concludes by recommending standards and creating a super-regulator to address disparate impact by AI. As the world moves towards automation, it is imperative that ongoing concerns about systemic discrimination are removed to prevent further marginalization in healthcare.</p>\",\"PeriodicalId\":73804,\"journal\":{\"name\":\"Journal of law and health\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of law and health\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of law and health","FirstCategoryId":"1085","ListUrlMain":"","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

Systemic discrimination in healthcare plagues marginalized groups. Physicians incorrectly view people of color as having higher pain tolerance, leading to undertreatment. Women with disabilities often go undiagnosed because their symptoms are dismissed. Low-income patients have less access to appropriate treatment. These patterns, and others, reflect long-standing disparities that have become ingrained in U.S. health systems. As the healthcare industry adopts artificial intelligence and algorithm-informed (AI) tools, it is vital that regulators address healthcare discrimination. AI tools are increasingly used by hospitals, physicians, and insurers to make both clinical and administrative decisions, yet no framework specifically places nondiscrimination obligations on AI users. The Food and Drug Administration has limited authority to regulate AI and has not sought to incorporate anti-discrimination principles into its guidance. Section 1557 of the Affordable Care Act has not been used to enforce nondiscrimination in healthcare AI and is under-utilized by the Office for Civil Rights. State-level protections through medical licensing boards or malpractice liability are similarly untested and have not yet extended nondiscrimination obligations to AI. This Article discusses how each of these legal obligations applies to healthcare AI and the ways in which each system can improve to address discrimination. It highlights how industries can self-regulate to set nondiscrimination standards, and it concludes by recommending standards and the creation of a super-regulator to address disparate impact from AI. As the world moves toward automation, it is imperative that ongoing concerns about systemic discrimination be resolved to prevent further marginalization in healthcare.
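
The abstract's central concept, disparate impact, has a conventional quantitative expression that may help readers unfamiliar with the doctrine. As a point of reference only, the minimal sketch below applies the four-fifths (80%) rule, a threshold borrowed from U.S. employment-discrimination practice, to hypothetical outputs of an AI triage tool. The group labels and data are invented for illustration; the Article itself proposes legal standards, not this or any particular metric.

```python
# Illustrative only: quantifying "disparate impact" with the four-fifths
# (80%) rule. All data are hypothetical; group names are placeholders.

from collections import Counter

# Hypothetical decisions of an AI triage tool: 1 = referred for extra care.
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 1), ("group_a", 0),
    ("group_b", 1), ("group_b", 0), ("group_b", 0), ("group_b", 0),
]

totals = Counter(group for group, _ in decisions)
favorable = Counter(group for group, outcome in decisions if outcome == 1)

# Selection rate: share of each group receiving the favorable outcome.
rates = {g: favorable[g] / totals[g] for g in totals}

# Impact ratio: each group's rate relative to the most-favored group's rate.
best = max(rates.values())
for group, rate in sorted(rates.items()):
    ratio = rate / best
    flag = "potential disparate impact" if ratio < 0.8 else "within 4/5 rule"
    print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f} ({flag})")
```

In a healthcare setting, the favorable outcome might be referral to a care-management program; a well-documented concern is that proxies such as prior healthcare spending can depress selection rates for groups with historically less access to care, producing exactly the kind of ratio this rule flags.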
