Towards Transnational Fairness in Machine Learning: A Case Study in Disaster Response Systems
Cem Kozcuer, Anne Mollen, Felix Bießmann
Minds and Machines (Journal Article), published 2024-05-09
DOI: 10.1007/s11023-024-09663-3
Citations: 0
Abstract
Research on fairness in machine learning (ML) has largely focused on individual and group fairness. With the adoption of ML-based technologies as assistive technology in complex societal transformations or crisis situations on a global scale, these existing definitions fail to account for algorithmic fairness transnationally. We propose to complement existing perspectives on algorithmic fairness with a notion of transnational algorithmic fairness and take first steps towards an analytical framework. We exemplify the relevance of a transnational fairness assessment in a case study on a disaster response system using images from online social media. In the presented case, ML systems are used as a support tool in categorizing and classifying images from social media after a disaster event as an almost instantly available source of information for coordinating disaster response. We present an empirical analysis assessing the transnational fairness of the application's outputs, based on national socio-demographic development indicators as potentially discriminatory attributes. In doing so, the paper combines interdisciplinary perspectives from data analytics, ML, digital media studies and media sociology in order to address fairness beyond the technical system. The case study investigated reflects an embedded perspective of peoples' everyday media use and social media platforms as producers of sociality and processors of data, with relevance far beyond the case of algorithmic fairness in disaster scenarios. Especially in light of the concentration of artificial intelligence (AI) development in the Global North and a perceived hegemonic constellation, we argue that transnational fairness offers a perspective on global injustices in relation to AI development and application that has the potential to substantiate discussions by identifying gaps in data and technology.
These analyses will ultimately enable researchers and policy makers to derive actionable insights that could alleviate existing problems with fair use of AI technology and mitigate risks associated with future developments.
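The fairness assessment the abstract describes, comparing a classifier's performance across countries grouped by socio-demographic development indicators, can be illustrated with a minimal sketch. This is not the authors' code: the grouping by an HDI-style tier, the record format, and all data below are hypothetical assumptions for illustration only.

```python
# Hypothetical sketch of a group-wise fairness check: disaggregate a
# classifier's accuracy by a country-level development-indicator tier
# and report the gap between the best- and worst-served group.
from collections import defaultdict

def groupwise_accuracy(records):
    """records: iterable of (group, y_true, y_pred) tuples."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for group, y_true, y_pred in records:
        totals[group] += 1
        hits[group] += int(y_true == y_pred)
    return {g: hits[g] / totals[g] for g in totals}

def accuracy_disparity(records):
    """Absolute accuracy gap between the best and worst group."""
    acc = groupwise_accuracy(records)
    return max(acc.values()) - min(acc.values())

# Made-up classifier outputs tagged by a development-indicator tier.
records = [
    ("high_hdi", 1, 1), ("high_hdi", 0, 0), ("high_hdi", 1, 1), ("high_hdi", 0, 1),
    ("low_hdi", 1, 0), ("low_hdi", 0, 0), ("low_hdi", 1, 1), ("low_hdi", 1, 0),
]
print(groupwise_accuracy(records))  # per-group accuracy
print(accuracy_disparity(records))  # 0.25 gap in this toy example
```

A nonzero disparity alone does not establish unfairness; the paper's point is that which grouping variable one disaggregates by (here, a national development indicator rather than an individual attribute) determines whether transnational inequities become visible at all.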
Journal description:
Minds and Machines, affiliated with the Society for Machines and Mentality, serves as a platform for fostering critical dialogue between the AI and philosophical communities. With a focus on problems of shared interest, the journal actively encourages discussions on the philosophical aspects of computer science.
Offering a global forum, Minds and Machines provides a space to debate and explore important and contentious issues within its editorial focus. The journal presents special editions dedicated to specific topics, invites critical responses to previously published works, and features review essays addressing current problem scenarios.
By facilitating a diverse range of perspectives, Minds and Machines encourages a reevaluation of the status quo and the development of new insights. Through this collaborative approach, the journal aims to bridge the gap between AI and philosophy, fostering a tradition of critique and ensuring these fields remain connected and relevant.