A systematic review of socio-technical gender bias in AI algorithms

Authors: P. Hall, D. Ellis
DOI: 10.1108/oir-08-2021-0452
Journal: Online Information Review (Q2, Computer Science, Information Systems; impact factor 3.1)
Publication date: 2023-03-14
Publication type: Journal Article
Purpose: Gender bias in artificial intelligence (AI) should be addressed as a priority before AI algorithms become ubiquitous, perpetuating and accentuating the bias. Although the problem has been identified as an established research and policy agenda, a cohesive review of existing research that specifically addresses gender bias from a socio-technical viewpoint is lacking. The purpose of this study is therefore to determine the social causes and consequences of, and proposed solutions to, gender bias in AI algorithms.

Design/methodology/approach: A comprehensive systematic review followed established protocols to ensure accurate and verifiable identification of suitable articles. The process identified 177 articles within the socio-technical framework, of which 64 were selected for in-depth analysis.

Findings: Most previous research has focused on technical rather than social causes of, consequences of, and solutions to AI bias. From a social perspective, gender bias in AI algorithms can be attributed equally to algorithmic design and training datasets. Social consequences are wide-ranging, with amplification of existing bias the most common (28%). Social solutions were concentrated on algorithmic design, specifically improving diversity in AI development teams (30%), increasing awareness (23%), human-in-the-loop approaches (23%) and integrating ethics into the design process (21%).

Originality/value: This systematic review is the first of its kind to focus on gender bias in AI algorithms from a social perspective within a socio-technical framework. Identification of the key causes and consequences of bias, and the breakdown of potential solutions, provides direction for future research and policy within the growing field of AI ethics.

Peer review: The peer review history for this article is available at https://publons.com/publon/10.1108/OIR-08-2021-0452
Journal description:
The journal provides a multi-disciplinary forum for scholars from a range of fields, including information studies/iSchools, data studies, internet studies, media and communication studies and information systems.
It publishes research on the social, political and ethical aspects of emergent digital information practices and platforms, and welcomes submissions that draw on critical and socio-technical perspectives to address these developments.
The journal welcomes empirical, conceptual and methodological contributions on any topic relevant to the broad field of digital information and communication, and is particularly interested in submissions that address emerging issues around the topics below.
Coverage includes (but is not limited to):
•Online communities, social networking and social media, including online political communication; crowdsourcing; positive computing and wellbeing.
•The social drivers and implications of emerging data practices, including open data; big data; data journeys and flows; and research data management.
•Digital transformations including organisations’ use of information technologies (e.g. Internet of Things and digitisation of user experience) to improve economic and social welfare, health and wellbeing, and protect the environment.
•Developments in digital scholarship and the production and use of scholarly content.
•Online and digital research methods, including their ethical aspects.