From theory to practice: Harmonizing taxonomies of trustworthy AI

IF 1.7 | Q3 HEALTH CARE SCIENCES & SERVICES | Health Policy Open | Pub Date: 2024-09-05 | DOI: 10.1016/j.hpopen.2024.100128
Christos A. Makridis, Joshua Mueller, Theo Tiffany, Andrew A. Borkowski, John Zachary, Gil Alterovitz
{"title":"从理论到实践:统一可信人工智能的分类标准","authors":"Christos A. Makridis ,&nbsp;Joshua Mueller ,&nbsp;Theo Tiffany ,&nbsp;Andrew A. Borkowski ,&nbsp;John Zachary ,&nbsp;Gil Alterovitz","doi":"10.1016/j.hpopen.2024.100128","DOIUrl":null,"url":null,"abstract":"<div><div>The increasing capabilities of AI pose new risks and vulnerabilities for organizations and decision makers. Several trustworthy AI frameworks have been created by U.S. federal agencies and international organizations to outline the principles to which AI systems must adhere for their use to be considered responsible. Different trustworthy AI frameworks reflect the priorities and perspectives of different stakeholders, and there is no consensus on a single framework yet. We evaluate the leading frameworks and provide a holistic perspective on trustworthy AI values, allowing federal agencies to create agency-specific trustworthy AI strategies that account for unique institutional needs and priorities. We apply this approach to the Department of Veterans Affairs, an entity with largest health care system in US. Further, we contextualize our framework from the perspective of the federal government on how to leverage existing trustworthy AI frameworks to develop a set of guiding principles that can provide the foundation for an agency to design, develop, acquire, and use AI systems in a manner that simultaneously fosters trust and confidence and meets the requirements of established laws and regulations.</div></div>","PeriodicalId":34527,"journal":{"name":"Health Policy Open","volume":null,"pages":null},"PeriodicalIF":1.7000,"publicationDate":"2024-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"From theory to practice: Harmonizing taxonomies of trustworthy AI\",\"authors\":\"Christos A. Makridis ,&nbsp;Joshua Mueller ,&nbsp;Theo Tiffany ,&nbsp;Andrew A. Borkowski ,&nbsp;John Zachary ,&nbsp;Gil Alterovitz\",\"doi\":\"10.1016/j.hpopen.2024.100128\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>The increasing capabilities of AI pose new risks and vulnerabilities for organizations and decision makers. Several trustworthy AI frameworks have been created by U.S. federal agencies and international organizations to outline the principles to which AI systems must adhere for their use to be considered responsible. Different trustworthy AI frameworks reflect the priorities and perspectives of different stakeholders, and there is no consensus on a single framework yet. We evaluate the leading frameworks and provide a holistic perspective on trustworthy AI values, allowing federal agencies to create agency-specific trustworthy AI strategies that account for unique institutional needs and priorities. We apply this approach to the Department of Veterans Affairs, an entity with largest health care system in US. 
Further, we contextualize our framework from the perspective of the federal government on how to leverage existing trustworthy AI frameworks to develop a set of guiding principles that can provide the foundation for an agency to design, develop, acquire, and use AI systems in a manner that simultaneously fosters trust and confidence and meets the requirements of established laws and regulations.</div></div>\",\"PeriodicalId\":34527,\"journal\":{\"name\":\"Health Policy Open\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":1.7000,\"publicationDate\":\"2024-09-05\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Health Policy Open\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2590229624000133\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"HEALTH CARE SCIENCES & SERVICES\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Health Policy Open","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2590229624000133","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"HEALTH CARE SCIENCES & SERVICES","Score":null,"Total":0}
Citations: 0

Abstract

The increasing capabilities of AI pose new risks and vulnerabilities for organizations and decision makers. Several trustworthy AI frameworks have been created by U.S. federal agencies and international organizations to outline the principles to which AI systems must adhere for their use to be considered responsible. Different trustworthy AI frameworks reflect the priorities and perspectives of different stakeholders, and there is no consensus on a single framework yet. We evaluate the leading frameworks and provide a holistic perspective on trustworthy AI values, allowing federal agencies to create agency-specific trustworthy AI strategies that account for unique institutional needs and priorities. We apply this approach to the Department of Veterans Affairs, an entity operating the largest health care system in the US. Further, we contextualize our framework from the perspective of the federal government on how to leverage existing trustworthy AI frameworks to develop a set of guiding principles that can provide the foundation for an agency to design, develop, acquire, and use AI systems in a manner that simultaneously fosters trust and confidence and meets the requirements of established laws and regulations.
Source journal: Health Policy Open (Medicine-Health Policy)
CiteScore: 3.80
Self-citation rate: 0.00%
Articles published: 21
Review time: 40 weeks