Measuring adherence to AI ethics: a methodology for assessing adherence to ethical principles in the use case of AI-enabled credit scoring application

Maria Pokholkova, Auxane Boch, Ellen Hohma, Christoph Lütge
Journal: *AI and Ethics*, vol. 5, no. 2, pp. 1291–1313
DOI: 10.1007/s43681-024-00468-9
Published: 2024-04-15 (Journal Article)
Full text: https://link.springer.com/article/10.1007/s43681-024-00468-9
PDF: https://link.springer.com/content/pdf/10.1007/s43681-024-00468-9.pdf
Citations: 0

Abstract

This article discusses the critical need for solutions to ethically assess artificial intelligence systems, underlining the importance of ethical principles in designing, developing, and employing these systems to enhance their acceptance in society. In particular, measuring AI applications’ adherence to ethical principles is identified as a major concern. This research proposes a methodology for measuring an application’s adherence to acknowledged ethical principles. The proposed concept is grounded in existing research on quantification, specifically the Expert Workshop method, which serves as the foundation of this study. The suggested method is tested on the use case of AI-enabled credit scoring applications, using the ethical principle of transparency as an example. Experts in AI development, AI ethics, finance, and regulation were invited to a workshop. The study’s findings underscore the importance of ethical AI implementation and highlight the benefits and limitations of measuring ethical adherence. The proposed methodology thus offers a foundation for future AI ethics assessments within and outside the financial industry, promoting responsible AI practices and constructive dialogue.
