{"title":"Ethicara for Responsible AI in Healthcare: A System for Bias Detection and AI Risk Management.","authors":"Maria Kritharidou, Georgios Chrysogonidis, Tasos Ventouris, Vaios Tsarapastsanis, Danai Aristeridou, Anastasia Karatzia, Veena Calambur, Ahsan Huda, Sabrina Hsueh","doi":"","DOIUrl":null,"url":null,"abstract":"<p><p>The increasing torrents of health AI innovations hold promise for facilitating the delivery of patient-centered care. Yet the enablement and adoption of AI innovations in the healthcare and life science industries can be challenging with the rising concerns of AI risks and the potential harms to health equity. This paper describes Ethicara, a system that enables health AI risk assessment for responsible AI model development. Ethicara works by orchestrating a collection of self-analytics services that detect and mitigate bias and increase model transparency from harmonized data models. For the lack of risk controls currently in the health AI development and deployment process, the self-analytics tools enhanced by Ethicara are expected to provide repeatable and measurable controls to operationalize voluntary risk management frameworks and guidelines (e.g., NIST RMF, FDA GMLP) and regulatory requirements emerging from the upcoming AI regulations (e.g., EU AI Act, US Blueprint for an AI Bill of Rights). In addition, Ethicara provides plug-ins via which analytics results are incorporated into healthcare applications. This paper provides an overview of Ethicara's architecture, pipeline, and technical components and showcases the system's capability to facilitate responsible AI use, and exemplifies the types of AI risk controls it enables in the healthcare and life science industry.</p>","PeriodicalId":72180,"journal":{"name":"AMIA ... Annual Symposium proceedings. AMIA Symposium","volume":"2023 ","pages":"2023-2032"},"PeriodicalIF":0.0000,"publicationDate":"2024-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11492113/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"AMIA ... Annual Symposium proceedings. AMIA Symposium","FirstCategoryId":"1085","ListUrlMain":"","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2023/1/1 0:00:00","PubModel":"eCollection","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
The growing stream of health AI innovations holds promise for facilitating the delivery of patient-centered care. Yet enabling and adopting AI innovations in the healthcare and life science industries can be challenging amid rising concerns about AI risks and potential harms to health equity. This paper describes Ethicara, a system that enables health AI risk assessment for responsible AI model development. Ethicara works by orchestrating a collection of self-analytics services that detect and mitigate bias and increase model transparency, operating over harmonized data models. Given the current lack of risk controls in the health AI development and deployment process, the self-analytics tools enhanced by Ethicara are expected to provide repeatable and measurable controls that operationalize voluntary risk management frameworks and guidelines (e.g., NIST RMF, FDA GMLP) as well as regulatory requirements emerging from upcoming AI regulations (e.g., the EU AI Act, the US Blueprint for an AI Bill of Rights). In addition, Ethicara provides plug-ins through which analytics results are incorporated into healthcare applications. This paper gives an overview of Ethicara's architecture, pipeline, and technical components; demonstrates the system's capability to facilitate responsible AI use; and illustrates the types of AI risk controls it enables in the healthcare and life science industries.
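The abstract describes repeatable, measurable bias-detection controls but does not specify their form. Below is a minimal, hypothetical Python sketch of one such control: a demographic-parity check that applies the four-fifths rule to a model's selection rates across a sensitive attribute. The function names, report fields, and threshold are illustrative assumptions, not Ethicara's actual API.

```python
# Hypothetical sketch of one bias-detection control of the kind the paper
# describes: a demographic-parity check over model predictions.
# Names and the 0.8 threshold are illustrative, not Ethicara's actual API.
from collections import defaultdict

def selection_rates(predictions, groups):
    """Return the positive-prediction rate for each sensitive group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for pred, group in zip(predictions, groups):
        counts[group][0] += int(pred == 1)
        counts[group][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

def demographic_parity_report(predictions, groups, ratio_threshold=0.8):
    """Flag a model whose group selection rates violate the four-fifths rule."""
    rates = selection_rates(predictions, groups)
    lo, hi = min(rates.values()), max(rates.values())
    ratio = lo / hi if hi > 0 else 1.0
    return {
        "selection_rates": rates,
        "parity_difference": hi - lo,          # gap between extreme groups
        "disparate_impact_ratio": ratio,       # min rate / max rate
        "flagged": ratio < ratio_threshold,    # True -> control fails
    }

if __name__ == "__main__":
    # Toy readmission-risk predictions split by a sensitive attribute.
    preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
    groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
    print(demographic_parity_report(preds, groups))
    # -> selection rates A: 0.6, B: 0.4; ratio 0.67 < 0.8, so flagged=True
```

A check like this is repeatable (the same inputs always produce the same report) and measurable (the ratio and difference are auditable numbers), which is what lets a self-analytics service serve as an operational control under frameworks such as the NIST RMF rather than a one-off manual review.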