Understanding and Mitigating Bias in Imaging Artificial Intelligence

IF 5.2 · CAS Tier 1 (Medicine) · Q1 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING · Radiographics · Pub Date: 2024-04-18 · DOI: 10.1148/rg.230067
Ali S. Tejani, Yee Seng Ng, Yin Xi, Jesse C. Rayan
{"title":"了解并减少成像人工智能中的偏差","authors":"Ali S. Tejani, Yee Seng Ng, Yin Xi, Jesse C. Rayan","doi":"10.1148/rg.230067","DOIUrl":null,"url":null,"abstract":"<p>Artificial intelligence (AI) algorithms are prone to bias at multiple stages of model development, with potential for exacerbating health disparities. However, bias in imaging AI is a complex topic that encompasses multiple coexisting definitions. <i>Bias</i> may refer to unequal preference to a person or group owing to preexisting attitudes or beliefs, either intentional or unintentional. However, <i>cognitive bias</i> refers to systematic deviation from objective judgment due to reliance on heuristics, and <i>statistical bias</i> refers to differences between true and expected values, commonly manifesting as systematic error in model prediction (ie, a model with output unrepresentative of real-world conditions). Clinical decisions informed by biased models may lead to patient harm due to action on inaccurate AI results or exacerbate health inequities due to differing performance among patient populations. However, while inequitable bias can harm patients in this context, a mindful approach leveraging equitable bias can address underrepresentation of minority groups or rare diseases. Radiologists should also be aware of bias after AI deployment such as automation bias, or a tendency to agree with automated decisions despite contrary evidence. Understanding common sources of imaging AI bias and the consequences of using biased models can guide preventive measures to mitigate its impact. Accordingly, the authors focus on sources of bias at stages along the imaging machine learning life cycle, attempting to simplify potentially intimidating technical terminology for general radiologists using AI tools in practice or collaborating with data scientists and engineers for AI tool development. The authors review definitions of bias in AI, describe common sources of bias, and present recommendations to guide quality control measures to mitigate the impact of bias in imaging AI. Understanding the terms featured in this article will enable a proactive approach to identifying and mitigating bias in imaging AI.</p><p>Published under a CC BY 4.0 license.</p><p>Test Your Knowledge questions for this article are available in the supplemental material.</p><p>See the invited commentary by Rouzrokh and Erickson in this issue.</p>","PeriodicalId":54512,"journal":{"name":"Radiographics","volume":"50 1","pages":""},"PeriodicalIF":5.2000,"publicationDate":"2024-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Understanding and Mitigating Bias in Imaging Artificial Intelligence\",\"authors\":\"Ali S. Tejani, Yee Seng Ng, Yin Xi, Jesse C. Rayan\",\"doi\":\"10.1148/rg.230067\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Artificial intelligence (AI) algorithms are prone to bias at multiple stages of model development, with potential for exacerbating health disparities. However, bias in imaging AI is a complex topic that encompasses multiple coexisting definitions. <i>Bias</i> may refer to unequal preference to a person or group owing to preexisting attitudes or beliefs, either intentional or unintentional. 
However, <i>cognitive bias</i> refers to systematic deviation from objective judgment due to reliance on heuristics, and <i>statistical bias</i> refers to differences between true and expected values, commonly manifesting as systematic error in model prediction (ie, a model with output unrepresentative of real-world conditions). Clinical decisions informed by biased models may lead to patient harm due to action on inaccurate AI results or exacerbate health inequities due to differing performance among patient populations. However, while inequitable bias can harm patients in this context, a mindful approach leveraging equitable bias can address underrepresentation of minority groups or rare diseases. Radiologists should also be aware of bias after AI deployment such as automation bias, or a tendency to agree with automated decisions despite contrary evidence. Understanding common sources of imaging AI bias and the consequences of using biased models can guide preventive measures to mitigate its impact. Accordingly, the authors focus on sources of bias at stages along the imaging machine learning life cycle, attempting to simplify potentially intimidating technical terminology for general radiologists using AI tools in practice or collaborating with data scientists and engineers for AI tool development. The authors review definitions of bias in AI, describe common sources of bias, and present recommendations to guide quality control measures to mitigate the impact of bias in imaging AI. Understanding the terms featured in this article will enable a proactive approach to identifying and mitigating bias in imaging AI.</p><p>Published under a CC BY 4.0 license.</p><p>Test Your Knowledge questions for this article are available in the supplemental material.</p><p>See the invited commentary by Rouzrokh and Erickson in this issue.</p>\",\"PeriodicalId\":54512,\"journal\":{\"name\":\"Radiographics\",\"volume\":\"50 1\",\"pages\":\"\"},\"PeriodicalIF\":5.2000,\"publicationDate\":\"2024-04-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Radiographics\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.1148/rg.230067\",\"RegionNum\":1,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Radiographics","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1148/rg.230067","RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING","Score":null,"Total":0}
Citations: 0

Abstract


Artificial intelligence (AI) algorithms are prone to bias at multiple stages of model development, with potential for exacerbating health disparities. However, bias in imaging AI is a complex topic that encompasses multiple coexisting definitions. Bias may refer to unequal preference to a person or group owing to preexisting attitudes or beliefs, either intentional or unintentional. However, cognitive bias refers to systematic deviation from objective judgment due to reliance on heuristics, and statistical bias refers to differences between true and expected values, commonly manifesting as systematic error in model prediction (ie, a model with output unrepresentative of real-world conditions). Clinical decisions informed by biased models may lead to patient harm due to action on inaccurate AI results or exacerbate health inequities due to differing performance among patient populations. However, while inequitable bias can harm patients in this context, a mindful approach leveraging equitable bias can address underrepresentation of minority groups or rare diseases. Radiologists should also be aware of bias after AI deployment such as automation bias, or a tendency to agree with automated decisions despite contrary evidence. Understanding common sources of imaging AI bias and the consequences of using biased models can guide preventive measures to mitigate its impact. Accordingly, the authors focus on sources of bias at stages along the imaging machine learning life cycle, attempting to simplify potentially intimidating technical terminology for general radiologists using AI tools in practice or collaborating with data scientists and engineers for AI tool development. The authors review definitions of bias in AI, describe common sources of bias, and present recommendations to guide quality control measures to mitigate the impact of bias in imaging AI. Understanding the terms featured in this article will enable a proactive approach to identifying and mitigating bias in imaging AI.
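The abstract's definition of statistical bias matches the standard statistical one, the gap between a model's expected output and the true value, and its warning about differing performance among patient populations suggests a simple quality-control check: stratify validation metrics by patient subgroup. The sketch below is not from the article; the DataFrame columns (y_true, y_score, group), the 0.5 operating threshold, and the 0.05 sensitivity gap are illustrative assumptions for auditing a binary imaging classifier with pandas and scikit-learn.

```python
# Minimal sketch (illustrative, not from the article): subgroup-stratified
# performance audit for a binary imaging classifier.
# Assumed DataFrame columns: "y_true" (ground truth 0/1), "y_score" (model
# probability), "group" (e.g., sex, age band, or scanner vendor).
import pandas as pd
from sklearn.metrics import confusion_matrix, roc_auc_score

def subgroup_report(df: pd.DataFrame, threshold: float = 0.5) -> pd.DataFrame:
    """Per-subgroup AUC, sensitivity, and specificity at a fixed threshold."""
    rows = []
    for group, sub in df.groupby("group"):
        y_true = sub["y_true"].astype(int)
        y_pred = (sub["y_score"] >= threshold).astype(int)
        tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
        rows.append({
            "group": group,
            "n": len(sub),
            # AUC is undefined when a subgroup contains only one class.
            "auc": roc_auc_score(y_true, sub["y_score"]) if y_true.nunique() == 2 else float("nan"),
            "sensitivity": tp / (tp + fn) if (tp + fn) else float("nan"),
            "specificity": tn / (tn + fp) if (tn + fp) else float("nan"),
        })
    return pd.DataFrame(rows)

def flag_disparity(report: pd.DataFrame, metric: str = "sensitivity", max_gap: float = 0.05) -> bool:
    """True if the best-to-worst subgroup gap in `metric` exceeds `max_gap`."""
    return (report[metric].max() - report[metric].min()) > max_gap
```

Run on a held-out validation set before deployment and again on local data afterward, a report like this gives a concrete trigger for the kind of quality-control measures the authors recommend, without presuming which mitigation (reweighting, recalibration, or retraining) is appropriate.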

Published under a CC BY 4.0 license.

Test Your Knowledge questions for this article are available in the supplemental material.

See the invited commentary by Rouzrokh and Erickson in this issue.

Source journal: Radiographics (Medicine - Nuclear Medicine)
CiteScore: 8.20
Self-citation rate: 5.50%
Articles per year: 224
Review turnaround: 4-8 weeks
Journal description: Launched by the Radiological Society of North America (RSNA) in 1981, RadioGraphics is one of the premier education journals in diagnostic radiology. Each bimonthly issue features 15–20 practice-focused articles spanning the full spectrum of radiologic subspecialties and addressing topics such as diagnostic imaging techniques, imaging features of a disease or group of diseases, radiologic-pathologic correlation, practice policy and quality initiatives, imaging physics, informatics, and lifelong learning. A special issue, a monograph focused on a single subspecialty or on a crossover topic of interest to multiple subspecialties, is published each October. Each issue offers more than a dozen opportunities to earn continuing medical education credits that qualify for AMA PRA Category 1 Credit™ and all online activities can be applied toward the ABR MOC Self-Assessment Requirement.
Latest articles in this journal:
Ankle and Foot Injuries in the Emergency Department: Checklist-based Approach to Radiographs.
More than Skin Deep: Imaging of Dermatologic Disease in the Head and Neck.
Neonatal Liver Imaging: Techniques, Role of Imaging, and Indications.
Improving Diagnosis of Acute Cholecystitis with US: New Paradigms.
Optimizing the Radiology Readout Experience.