Making AI Explainable in the Global South: A Systematic Review

Chinasa T. Okolo, Nicola Dell, Aditya Vashistha
DOI: 10.1145/3530190.3534802
Venue: ACM SIGCAS/SIGCHI Conference on Computing and Sustainable Societies (COMPASS)
Publication date: 2022-06-29
Citations: 10

Abstract

Artificial intelligence (AI) and machine learning (ML) are quickly becoming pervasive in ways that impact the lives of all humans across the globe. In an effort to make otherwise "black box" AI/ML systems more understandable, the field of Explainable AI (XAI) has arisen with the goal of developing algorithms, toolkits, frameworks, and other techniques that enable people to comprehend, trust, and manage AI systems. However, although XAI is a rapidly growing area of research, most of the work has focused on contexts in the Global North, and little is known about if or how XAI techniques have been designed, deployed, or tested with communities in the Global South. This gap is concerning, especially in light of rapidly growing enthusiasm from governments, companies, and academics to use AI/ML to "solve" problems in the Global South. Our paper contributes the first systematic review of XAI research in the Global South, providing an early look at emerging work in the space. We identified 16 papers from 15 different venues that targeted a wide range of application domains. All of the papers were published in the last three years. Of the 16 papers, 13 focused on applying a technical XAI method, all of which involved the use of (at least some) data that was local to the context. However, only three papers engaged with or involved humans in the work, and only one attempted to deploy their XAI system with target users. We close by reflecting on the current state of XAI research in the Global South, discussing data and model considerations for building and deploying XAI systems in these regions, and highlighting the need for human-centered approaches to XAI in the Global South.