Bias in artificial intelligence in vascular surgery

Zachary Tran, Julianne Byun, Ha Yeon Lee, Hans Boggs, Emma Y. Tomihama, Sharon C. Kiang

DOI: 10.1053/j.semvascsurg.2023.07.003
Published: 2023-09-01
Abstract
Application of artificial intelligence (AI) has revolutionized the utilization of big data, especially in patient care. The potential of deep learning models to learn without a priori assumptions and to connect seemingly unrelated information generates excitement alongside hesitation, as AI's limitations are not yet fully understood. Bias, ranging from data collection and input, to algorithm development, to human review of algorithm output, affects AI's application to clinical patient care and presents unique challenges that differ significantly from biases in traditional analyses. Algorithm fairness, a new field of research within AI, aims to mitigate bias by evaluating data at the preprocessing stage, optimizing during algorithm development, and evaluating algorithm output at the postprocessing stage. As the field continues to develop, researchers must remain cognizant of the inherent biases and limitations related to black-box decision making, biased data sets agnostic to patient-level disparities, wide variation in present methodologies, and the lack of common reporting standards; ongoing research will be needed to provide transparency to AI and its applications.
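To make the postprocessing stage of the fairness pipeline concrete, the sketch below computes a demographic parity difference, one common fairness metric: the gap in positive-prediction rates between two patient subgroups. This is an illustrative example, not a method from the article; the function names, the subgroup labels, and the hypothetical model outputs are all assumptions made for demonstration.

```python
# Illustrative postprocessing fairness check (assumed example, not from
# the article): demographic parity difference between patient subgroups.

def positive_rate(predictions):
    """Fraction of predictions that are positive (coded as 1)."""
    return sum(predictions) / len(predictions)

def demographic_parity_difference(preds_group_a, preds_group_b):
    """Absolute gap in positive-prediction rates between two groups.

    A value near 0 suggests the model flags both subgroups at similar
    rates; a large gap is a signal to audit the model for bias.
    """
    return abs(positive_rate(preds_group_a) - positive_rate(preds_group_b))

# Hypothetical binary model outputs for two patient subgroups
group_a = [1, 0, 1, 1, 0, 1, 0, 1]  # positive rate 5/8 = 0.625
group_b = [0, 0, 1, 0, 1, 0, 0, 0]  # positive rate 2/8 = 0.250

gap = demographic_parity_difference(group_a, group_b)
print(f"demographic parity difference: {gap:.3f}")  # 0.375
```

A postprocessing audit of this kind evaluates only the algorithm's outputs, so it can be applied even to black-box models; in-processing approaches, by contrast, would add such a disparity term to the training objective itself.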