How does collective action mitigate the negative outcomes or harm caused by algorithmic decision making (ADM) to the recipients of its decision outputs? This study investigates this question, particularly in light of the complexities of transparency and ADM operators' reluctance to make changes. It applies a framing perspective drawn from the social movements literature to a longitudinal analysis of a case of governmental use of ADM. The study contributes to prior literature by revealing how ADM transparency, that is, understanding of ADM and its outcomes, can manifest at two levels: situational and systemic. Situational transparency frames understandings of how ADM operates in particular localized situations and sees harm as deriving from how the system intersects with these specifics. Systemic transparency operates at an aggregated level, frames understandings of how ADM operates across social situations, and sees harm as inherent in the system itself. Both raise important and complementary questions about ADM systems and their effects. In addition, the study reveals how collective action mitigates harm through purposeful strategy combinations that develop transparency and achieve frame transformations that intensify pressure on operators to change. When ADM transparency at the situational level indicates harm, frame transformations that amplify normative pressures are likely to elicit harm-mitigating change unless ADM operators are resistant. In contrast, when ADM transparency at the systemic level reveals harm, frame transformations that create coercive pressures are required, because these compel ADM operators to fundamentally redesign or abandon their systems despite the adverse impacts on them.