{"title":"这个人工智能有性别歧视吗?有偏见的人工智能的拟人化外观和可解释性对用户偏见认知和信任的影响","authors":"Hou Tsung-Yu , Tseng Yu-Chia , Yuan Chien Wen (Tina)","doi":"10.1016/j.ijinfomgt.2024.102775","DOIUrl":null,"url":null,"abstract":"<div><p>Biases in artificial intelligence (AI), a pressing issue in human-AI interaction, can be exacerbated by AI systems’ opaqueness. This paper reports on our development of a user-centered explainable-AI approach to reducing such opaqueness, guided by the theoretical framework of anthropomorphism and the results of two 3 × 3 between-subjects experiments (n = 207 and n = 223). Specifically, those experiments investigated how, in a gender-biased hiring situation, three levels of AI human-likeness (low, medium, high) and three levels of richness of AI explanation (none, lean, rich) influenced users’ 1) perceptions of AI bias and 2) adoption of AI’s recommendations, as well as how such perceptions and adoption varied across participant characteristics such as gender and pre-existing trust in AI. We found that comprehensive explanations helped users to recognize AI bias and mitigate its influence, and that this effect was particularly pronounced among females in a scenario where females were being discriminated against. Follow-up interviews corroborated our quantitative findings. These results can usefully inform explainable AI interface design.</p></div>","PeriodicalId":48422,"journal":{"name":"International Journal of Information Management","volume":"76 ","pages":"Article 102775"},"PeriodicalIF":20.1000,"publicationDate":"2024-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Is this AI sexist? 
The effects of a biased AI’s anthropomorphic appearance and explainability on users’ bias perceptions and trust\",\"authors\":\"Hou Tsung-Yu , Tseng Yu-Chia , Yuan Chien Wen (Tina)\",\"doi\":\"10.1016/j.ijinfomgt.2024.102775\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>Biases in artificial intelligence (AI), a pressing issue in human-AI interaction, can be exacerbated by AI systems’ opaqueness. This paper reports on our development of a user-centered explainable-AI approach to reducing such opaqueness, guided by the theoretical framework of anthropomorphism and the results of two 3 × 3 between-subjects experiments (n = 207 and n = 223). Specifically, those experiments investigated how, in a gender-biased hiring situation, three levels of AI human-likeness (low, medium, high) and three levels of richness of AI explanation (none, lean, rich) influenced users’ 1) perceptions of AI bias and 2) adoption of AI’s recommendations, as well as how such perceptions and adoption varied across participant characteristics such as gender and pre-existing trust in AI. We found that comprehensive explanations helped users to recognize AI bias and mitigate its influence, and that this effect was particularly pronounced among females in a scenario where females were being discriminated against. Follow-up interviews corroborated our quantitative findings. 
These results can usefully inform explainable AI interface design.</p></div>\",\"PeriodicalId\":48422,\"journal\":{\"name\":\"International Journal of Information Management\",\"volume\":\"76 \",\"pages\":\"Article 102775\"},\"PeriodicalIF\":20.1000,\"publicationDate\":\"2024-03-16\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"International Journal of Information Management\",\"FirstCategoryId\":\"91\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0268401224000239\",\"RegionNum\":1,\"RegionCategory\":\"管理学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"INFORMATION SCIENCE & LIBRARY SCIENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Information Management","FirstCategoryId":"91","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0268401224000239","RegionNum":1,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"INFORMATION SCIENCE & LIBRARY SCIENCE","Score":null,"Total":0}
Is this AI sexist? The effects of a biased AI’s anthropomorphic appearance and explainability on users’ bias perceptions and trust
Biases in artificial intelligence (AI), a pressing issue in human-AI interaction, can be exacerbated by AI systems’ opaqueness. This paper reports on our development of a user-centered explainable-AI approach to reducing such opaqueness, guided by the theoretical framework of anthropomorphism and the results of two 3 × 3 between-subjects experiments (n = 207 and n = 223). Specifically, those experiments investigated how, in a gender-biased hiring situation, three levels of AI human-likeness (low, medium, high) and three levels of richness of AI explanation (none, lean, rich) influenced users’ 1) perceptions of AI bias and 2) adoption of AI’s recommendations, as well as how such perceptions and adoption varied across participant characteristics such as gender and pre-existing trust in AI. We found that comprehensive explanations helped users to recognize AI bias and mitigate its influence, and that this effect was particularly pronounced among females in a scenario where females were being discriminated against. Follow-up interviews corroborated our quantitative findings. These results can usefully inform explainable AI interface design.
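The 3 × 3 between-subjects design described above (AI human-likeness × explanation richness) can be sketched with synthetic data. Everything in this sketch — the effect sizes, the ~23 participants per cell, and the 1–7 rating scale — is a hypothetical illustration for clarity, not the authors' materials, data, or analysis:

```python
import random
import statistics
from itertools import product

# Two fully crossed between-subjects factors, three levels each,
# as in the paper's 3 x 3 design.
HUMAN_LIKENESS = ["low", "medium", "high"]
EXPLANATION = ["none", "lean", "rich"]

random.seed(42)  # deterministic simulation

# Hypothetical effect: richer explanations raise bias-perception ratings.
effect = {"none": 0.0, "lean": 0.5, "rich": 1.2}

# Simulate one bias-perception rating (1-7 scale) per participant;
# 9 cells x 23 participants = 207, matching n = 207 in Experiment 1.
data = []
for likeness, explanation in product(HUMAN_LIKENESS, EXPLANATION):
    for _ in range(23):
        rating = min(7.0, max(1.0, random.gauss(3.5 + effect[explanation], 1.0)))
        data.append((likeness, explanation, rating))

# Cell means: the descriptive first step before a factorial (two-way) ANOVA.
for likeness, explanation in product(HUMAN_LIKENESS, EXPLANATION):
    cell = [r for l, e, r in data if l == likeness and e == explanation]
    print(f"{likeness:>6} x {explanation:>4}: mean = {statistics.mean(cell):.2f}")
```

In a real analysis, each cell's ratings would feed a two-way ANOVA testing main effects of each factor and their interaction; the simulation above only shows the design's structure.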
About the journal:
The International Journal of Information Management (IJIM) is a distinguished, international, and peer-reviewed journal dedicated to providing its readers with top-notch analysis and discussions within the evolving field of information management. Key features of the journal include:
Comprehensive Coverage:
IJIM keeps readers informed with major papers, reports, and reviews.
Topical Relevance:
The journal remains current and relevant through Viewpoint articles and regular features like Research Notes, Case Studies, and a Reviews section, ensuring readers are updated on contemporary issues.
Focus on Quality:
IJIM prioritizes high-quality papers that address contemporary issues in information management.