From theory to practice: Harmonizing taxonomies of trustworthy AI
Christos A. Makridis, Joshua Mueller, Theo Tiffany, Andrew A. Borkowski, John Zachary, Gil Alterovitz
Health Policy Open, Volume 7, Article 100128. Published September 5, 2024. DOI: 10.1016/j.hpopen.2024.100128
Abstract
The increasing capabilities of AI pose new risks and vulnerabilities for organizations and decision makers. Several trustworthy AI frameworks have been created by U.S. federal agencies and international organizations to outline the principles to which AI systems must adhere for their use to be considered responsible. Different trustworthy AI frameworks reflect the priorities and perspectives of different stakeholders, and there is not yet consensus on a single framework. We evaluate the leading frameworks and provide a holistic perspective on trustworthy AI values, allowing federal agencies to create agency-specific trustworthy AI strategies that account for unique institutional needs and priorities. We apply this approach to the Department of Veterans Affairs, which operates the largest health care system in the United States. Further, we contextualize our framework from the perspective of the federal government, showing how existing trustworthy AI frameworks can be leveraged to develop a set of guiding principles that provide the foundation for an agency to design, develop, acquire, and use AI systems in a manner that simultaneously fosters trust and confidence and meets the requirements of established laws and regulations.