A systematic review of socio-technical gender bias in AI algorithms

IF 3.1 · Region 3 (Management) · Q2 (Computer Science, Information Systems) · Online Information Review · Pub Date: 2023-03-14 · DOI: 10.1108/oir-08-2021-0452
P. Hall, D. Ellis
Citations: 1

Abstract

Purpose: Gender bias in artificial intelligence (AI) should be solved as a priority before AI algorithms become ubiquitous, perpetuating and accentuating the bias. While the problem has been identified as an established research and policy agenda, a cohesive review of existing research specifically addressing gender bias from a socio-technical viewpoint is lacking. Thus, the purpose of this study is to determine the social causes and consequences of, and proposed solutions to, gender bias in AI algorithms.

Design/methodology/approach: A comprehensive systematic review followed established protocols to ensure accurate and verifiable identification of suitable articles. The process revealed 177 articles in the socio-technical framework, with 64 articles selected for in-depth analysis.

Findings: Most previous research has focused on technical rather than social causes, consequences and solutions to AI bias. From a social perspective, gender bias in AI algorithms can be attributed equally to algorithmic design and training datasets. Social consequences are wide-ranging, with amplification of existing bias the most common at 28%. Social solutions were concentrated on algorithmic design, specifically improving diversity in AI development teams (30%), increasing awareness (23%), human-in-the-loop (23%) and integrating ethics into the design process (21%).

Originality/value: This systematic review is the first of its kind to focus on gender bias in AI algorithms from a social perspective within a socio-technical framework. Identification of key causes and consequences of bias and the breakdown of potential solutions provides direction for future research and policy within the growing field of AI ethics.

Peer review: The peer review history for this article is available at https://publons.com/publon/10.1108/OIR-08-2021-0452
Source journal: Online Information Review (Engineering & Technology – Computer Science: Information Systems)
CiteScore: 6.90
Self-citation rate: 16.10%
Annual output: 67 articles
Review time: 6 months
About the journal: The journal provides a multi-disciplinary forum for scholars from a range of fields, including information studies/iSchools, data studies, internet studies, media and communication studies and information systems. It publishes research on the social, political and ethical aspects of emergent digital information practices and platforms, and welcomes submissions that draw upon critical and socio-technical perspectives in order to address these developments. It welcomes empirical, conceptual and methodological contributions on any topics relevant to the broad field of digital information and communication; however, submissions addressing emerging issues around the topics below are particularly encouraged. Coverage includes (but is not limited to):
• Online communities, social networking and social media, including online political communication; crowdsourcing; positive computing and wellbeing.
• The social drivers and implications of emerging data practices, including open data; big data; data journeys and flows; and research data management.
• Digital transformations, including organisations' use of information technologies (e.g. Internet of Things and digitisation of user experience) to improve economic and social welfare, health and wellbeing, and protect the environment.
• Developments in digital scholarship and the production and use of scholarly content.
• Online and digital research methods, including their ethical aspects.