Addressing discriminatory bias in artificial intelligence systems operated by companies: An analysis of end-user perspectives

Impact Factor: 11.1 · CAS Zone 1, Management Science · Q1 ENGINEERING, INDUSTRIAL · Technovation · Pub Date: 2024-10-15 · DOI: 10.1016/j.technovation.2024.103118
Rafael Lucas Borba, Iuri Emmanuel de Paula Ferreira, Paulo Henrique Bertucci Ramos
Citations: 0

Abstract

The use of AI in different applications and for different purposes has raised concerns due to discriminatory biases identified in the technology. This paper aims to identify and analyze some of the main measures proposed by Bill No. 2338/23 of the Federative Republic of Brazil to combat discriminatory bias, measures that companies should adopt in order to provide and/or operate fair and non-discriminatory AI systems. To do so, it first attempts to measure and analyze people's perceptions of the possibility that AI systems are discriminatory. For this purpose, a qualitative, descriptive, exploratory study was conducted, using the inhabitants of the Southeast region of Brazil as a reference sample. The survey results suggest that people are increasingly aware that AI systems are not neutral and may come to incorporate and reproduce the prejudices and forms of discrimination present in society. The incorporation of such biases results from issues related to the quality and diversity of the data used, inaccuracies in the algorithms employed, and biases on the part of both developers and operators. Thus, this work sought to reduce this gap and, at the same time, break down the barrier created by the lack of dialogue with the public, in order to contribute to a democratic debate with society.
Source journal
Technovation
Subject category: Management Science / Engineering, Industrial
CiteScore: 15.10
Self-citation rate: 11.20%
Annual article output: 208
Review time: 91 days
Journal introduction: The interdisciplinary journal Technovation covers various aspects of technological innovation, exploring processes, products, and social impacts. It examines innovation in both process and product realms, including social innovations like regulatory frameworks and non-economic benefits. Topics range from emerging trends and capital for development to managing technology-intensive ventures and innovation in organizations of different sizes. It also discusses organizational structures, investment strategies for science and technology enterprises, and the roles of technological innovators. Additionally, it addresses technology transfer between developing countries and innovation across enterprise, political, and economic systems.
Latest articles in this journal:
- Capturing the breadth of value creation with science fiction storytelling: Evidence from smart service design workshops
- A classification framework for generative artificial intelligence for social good
- Digital technology and innovation: The impact of blockchain application on enterprise innovation
- Exploring and investigating the complementarity and multidimensionality of innovation for sustainability research: Past, present and future
- Facilitator or figurehead? The impact of academician shareholder on corporate innovation: Evidence from China