Assessing and Mitigating Bias in Artificial Intelligence: A review

Deepak Sinwar, Akruti Sinha, Devika Sapra, Vijander Singh, Ghanshyam Raghuwanshi
{"title":"Assessing and Mitigating Bias in Artificial Intelligence: A review","authors":"Deepak Sinwar, Akruti Sinha, Devika Sapra, Vijander Singh, Ghanshyam Raghuwanshi","doi":"10.2174/2666255816666230523114425","DOIUrl":null,"url":null,"abstract":"\n\nThere has been an exponential increase in discussions about bias in Artificial Intelligence (AI) systems. Bias in AI has typically been defined as a divergence from standard statistical patterns in the output of an AI model, which could be due to a biased dataset or biased assumptions. While the bias in artificially taught models is attributed able to bias in the dataset provided by humans, there is still room for advancement in terms of bias mitigation in AI models. The failure to detect bias in datasets or models stems from the \"black box\" problem or a lack of understanding of algorithmic outcomes. This paper provides a comprehensive review of the analysis of the approaches provided by researchers and scholars to mitigate AI bias and investigate the several methods of employing a responsible AI model for decision-making processes. We clarify what bias means to different people, as well as provide the actual definition of bias in AI systems. In addition, the paper discussed the causes of bias in AI systems thereby permitting researchers to focus their efforts on minimising the causes and mitigating bias. Finally, we recommend the best direction for future research to ensure the discovery of the most accurate method for reducing bias in algorithms. We hope that this study will help researchers to think from different perspectives while developing unbiased systems.\n","PeriodicalId":36514,"journal":{"name":"Recent Advances in Computer Science and Communications","volume":" ","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2023-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Recent Advances in Computer Science and Communications","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.2174/2666255816666230523114425","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"Computer Science","Score":null,"Total":0}
引用次数: 0

Abstract

There has been an exponential increase in discussions about bias in Artificial Intelligence (AI) systems. Bias in AI is typically defined as a divergence from standard statistical patterns in the output of an AI model, which may stem from a biased dataset or biased assumptions. While the bias in artificially taught models is largely attributable to bias in the datasets provided by humans, there is still room for advancement in bias mitigation for AI models. The failure to detect bias in datasets or models stems from the "black box" problem, that is, a lack of understanding of algorithmic outcomes. This paper provides a comprehensive review of the approaches proposed by researchers and scholars to mitigate AI bias and investigates several methods of employing responsible AI models in decision-making processes. We clarify what bias means to different people and provide a working definition of bias in AI systems. In addition, the paper discusses the causes of bias in AI systems, thereby allowing researchers to focus their efforts on minimising those causes and mitigating bias. Finally, we recommend directions for future research to ensure the discovery of the most accurate methods for reducing bias in algorithms. We hope that this study will help researchers think from different perspectives while developing unbiased systems.
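The abstract's working definition of bias, a divergence from expected statistical patterns in a model's output, can be made concrete with a simple group-fairness check. The sketch below is our illustration, not a method from the paper: it computes the demographic parity difference, i.e., the gap in positive-prediction rates between two groups, where `y_pred` and `group` are hypothetical arrays standing in for a model's outputs and a protected attribute.

```python
# Illustrative sketch (not from the reviewed paper): demographic parity
# difference as one measurable form of output bias.
import numpy as np

def demographic_parity_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute gap in positive-prediction rates between two groups.

    y_pred: binary model predictions (0/1).
    group:  binary protected-attribute labels (0/1); hypothetical here.
    """
    rate_a = y_pred[group == 0].mean()  # positive-prediction rate, group 0
    rate_b = y_pred[group == 1].mean()  # positive-prediction rate, group 1
    return abs(rate_a - rate_b)

# Toy data: the model predicts positives for group 1 more often.
y_pred = np.array([1, 0, 0, 1, 1, 1, 0, 1])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_difference(y_pred, group))  # 0.25
```

A gap near zero is consistent with demographic parity; a larger gap flags one concrete, auditable instance of the dataset- or assumption-driven bias the review surveys.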