Title: AI: Friend or foe of fairness perceptions of the tax administration? A survey experiment on citizens' procedural fairness perceptions
Authors: Anouk Decuypere, Anne Van de Vijver
DOI: 10.1016/j.giq.2024.102002
Journal: Government Information Quarterly, vol. 42, issue 1, Article 102002 (Impact Factor 7.8; JCR Q1, Information Science & Library Science)
Publication date: 2025-01-14 (Journal Article)
URL: https://www.sciencedirect.com/science/article/pii/S0740624X24000947
Citations: 0
Abstract
Governments are increasingly using AI for their decision making. Research on citizen perceptions highlights the context-dependent nature of their fairness assessments, leaving administrations unsure about how to implement AI so that citizens support these procedures. The survey experiments in this study, conducted as a pilot and a main study (Npilot = 232; Nmain study = 2366), focus on a high-risk decision-making context, i.e., the selection of citizens for fraud detection. In the scenarios, we manipulated the proportion of the selection made by AI, based on information from past fraudsters, versus civil servants, who work based on their experience. In addition, we tested the effect of transparency (and explanation) statements and their impact on procedural fairness scores. We found that a higher proportion of AI in the selection for fraud audits was perceived as more procedurally fair, mostly through increased scores on bias suppression and consistency. However, participants' general attitude toward AI and trust in the administration explained more variance than the experimental manipulation. Transparency (explanations) had no impact.
About the journal:
Government Information Quarterly (GIQ) delves into the convergence of policy, information technology, government, and the public. It explores the impact of policies on government information flows, the role of technology in innovative government services, and the dynamic between citizens and governing bodies in the digital age. GIQ serves as a premier journal, disseminating high-quality research and insights that bridge the realms of policy, information technology, government, and public engagement.