Insights Into Incorporating Trustworthiness and Ethics in AI Systems With Explainable AI

Meghana Kshirsagar, Krishn Kumar Gupt, G. Vaidya, C. Ryan, Joseph P. Sullivan, Vivek Kshirsagar
{"title":"将可解释的AI与AI系统中的可信度和伦理相结合的见解","authors":"Meghana Kshirsagar, Krishn Kumar Gupt, G. Vaidya, C. Ryan, Joseph P. Sullivan, Vivek Kshirsagar","doi":"10.4018/ijncr.310006","DOIUrl":null,"url":null,"abstract":"Over the past seven decades since the advent of artificial intelligence (AI) technology, researchers have demonstrated and deployed systems incorporating AI in various domains. The absence of model explainability in critical systems such as medical AI and credit risk assessment among others has led to neglect of key ethical and professional principles which can cause considerable harm. With explainability methods, developers can check their models beyond mere performance and identify errors. This leads to increased efficiency in time and reduces development costs. The article summarizes that steering the traditional AI systems toward responsible AI engineering can address concerns raised in the deployment of AI systems and mitigate them by incorporating explainable AI methods. Finally, the article concludes with the societal benefits of the futuristic AI systems and the market shares for revenue generation possible through the deployment of trustworthy and ethical AI systems.","PeriodicalId":369881,"journal":{"name":"Int. J. Nat. Comput. Res.","volume":"66 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Insights Into Incorporating Trustworthiness and Ethics in AI Systems With Explainable AI\",\"authors\":\"Meghana Kshirsagar, Krishn Kumar Gupt, G. Vaidya, C. Ryan, Joseph P. Sullivan, Vivek Kshirsagar\",\"doi\":\"10.4018/ijncr.310006\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Over the past seven decades since the advent of artificial intelligence (AI) technology, researchers have demonstrated and deployed systems incorporating AI in various domains. The absence of model explainability in critical systems such as medical AI and credit risk assessment among others has led to neglect of key ethical and professional principles which can cause considerable harm. With explainability methods, developers can check their models beyond mere performance and identify errors. This leads to increased efficiency in time and reduces development costs. The article summarizes that steering the traditional AI systems toward responsible AI engineering can address concerns raised in the deployment of AI systems and mitigate them by incorporating explainable AI methods. Finally, the article concludes with the societal benefits of the futuristic AI systems and the market shares for revenue generation possible through the deployment of trustworthy and ethical AI systems.\",\"PeriodicalId\":369881,\"journal\":{\"name\":\"Int. J. Nat. Comput. Res.\",\"volume\":\"66 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Int. J. Nat. Comput. 
Res.\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.4018/ijncr.310006\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Int. J. Nat. Comput. Res.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.4018/ijncr.310006","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1

Abstract

Over the past seven decades since the advent of artificial intelligence (AI) technology, researchers have demonstrated and deployed systems incorporating AI in various domains. The absence of model explainability in critical systems such as medical AI and credit risk assessment, among others, has led to the neglect of key ethical and professional principles, which can cause considerable harm. With explainability methods, developers can check their models beyond mere performance and identify errors. This increases time efficiency and reduces development costs. The article summarizes how steering traditional AI systems toward responsible AI engineering can address concerns raised in the deployment of AI systems and mitigate them by incorporating explainable AI methods. Finally, the article concludes with the societal benefits of futuristic AI systems and the market share for revenue generation possible through the deployment of trustworthy and ethical AI systems.
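The abstract's claim that explainability methods let developers check models beyond raw performance can be illustrated with a minimal sketch, not taken from the article: a synthetic credit-risk classifier inspected with model-agnostic permutation feature importance from scikit-learn. The dataset, feature names, and thresholds below are hypothetical placeholders chosen only for illustration.

```python
# Minimal sketch (assumptions: synthetic data, illustrative feature names).
# Shows how an explainability check can reveal what a model relies on,
# which a single accuracy number cannot.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# Synthetic "credit risk" features: income, debt-to-income ratio, age, pure noise.
X = np.column_stack([
    rng.normal(50_000, 15_000, n),   # income
    rng.uniform(0, 1, n),            # debt-to-income ratio
    rng.integers(18, 80, n),         # age
    rng.normal(0, 1, n),             # noise feature with no real effect
])
# Default risk driven mainly by debt ratio and (inversely) income.
y = (X[:, 1] * 2 - X[:, 0] / 100_000 + rng.normal(0, 0.3, n) > 0.6).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print(f"test accuracy: {model.score(X_test, y_test):.3f}")

# Beyond accuracy: which features does the model actually depend on?
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, imp in zip(["income", "debt_ratio", "age", "noise"], result.importances_mean):
    print(f"{name:>10}: {imp:+.4f}")
# A large importance on "noise" or "age" would flag a spurious or potentially
# unfair dependence, i.e., the kind of error explainability methods help surface.
```

This is one of many possible explainability checks; the article itself does not prescribe a specific method or library.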