People’s experiences of discrimination are often shaped by multiple intersecting factors, yet algorithmic fairness research rarely reflects this complexity. While intersectionality offers tools for understanding how forms of oppression interact, current approaches to intersectional algorithmic fairness tend to focus on narrowly defined demographic subgroups. These methods contribute important insights but risk oversimplifying social reality and neglecting structural inequalities. In this paper, we outline how a substantive approach to intersectional algorithmic fairness can reorient this research and practice. In particular, we propose Substantive Intersectional Algorithmic Fairness, extending Ben Green’s (Philos Technol, 2022. https://doi.org/10.1007/s13347-022-00584-6) notion of substantive algorithmic fairness with insights from intersectional feminist theory. Aiming to provide guidance that is as actionable as possible, we articulate our approach as ten desiderata to guide the design, assessment, and deployment of algorithmic systems that address systemic inequities while mitigating harms to intersectionally marginalized communities. Rather than prescribing fixed operationalizations, these desiderata invite AI practitioners and experts to reflect on assumptions of neutrality, the use of protected attributes, the inclusion of multiply marginalized groups, and the transformative potential of algorithmic systems. By bridging computational and social science perspectives, the approach emphasizes that fairness cannot be separated from social context, and that in some cases, principled non-deployment may be necessary.
