From Trustworthy Principles to a Trustworthy Development Process: The Need and Elements of Trusted Development of AI Systems

AI (Basel, Switzerland) · Pub Date: 2023-10-13 · DOI: 10.3390/ai4040046 · IF 3.1, Q2 (Computer Science, Artificial Intelligence)
Ellen Hohma, Christoph Lütge

Abstract

The current endeavor of moving AI ethics from theory to practice is frequently observable in academia and industry and marks a major achievement in the theoretical understanding of responsible AI. Its practical application, however, still poses challenges, as mechanisms for translating the proposed principles into readily feasible actions are often considered unclear and not ready for practice. In particular, practitioners often highlight the lack of uniform, standardized approaches aligned with regulatory provisions as a major obstacle to the practical realization of AI governance. To address these challenges, we propose shifting the focus from solely the trustworthiness of AI products to the perceived trustworthiness of the development process, introducing a concept for a trustworthy development process for AI systems. We derive this process from a semi-systematic literature analysis of common AI governance documents, identifying the most prominent measures for operationalizing responsible AI and comparing them to the implications for AI providers arising from EU-centered regulatory frameworks. Assessing the resulting process along derived characteristics of trustworthy processes shows that, while a lack of clarity is often cited as a major drawback and many AI providers tend to wait for finalized regulations before reacting, the summarized landscape of proposed AI governance mechanisms can already cover many of the circulating binding and non-binding demands, with similar activities addressing fundamental risks. Furthermore, while many factors of procedural trustworthiness are already fulfilled, limitations arise particularly from the vagueness of currently proposed measures, calling for measures to be detailed based on use cases and the system's context.
Source journal: AI (Basel, Switzerland) · CiteScore 7.20 · Self-citation rate 0.00% · Review time 11 weeks