Amidst concerns over biased and misguided government decisions reached through algorithmic processing, it is important for members of society to be able to perceive that public authorities are making fair, accurate, and trustworthy decisions. Inspired in part by equity and procedural justice theories and by theories of attitudes towards technologies, we posited that the perception of these attributes of decisions is influenced by the type of explanation offered, which can be input-based, group-based, case-based, or counterfactual. We tested our hypotheses in two studies, each involving a pre-registered online survey experiment conducted in December 2022. In both studies, the subjects (N = 1200) were senior officers at stock companies registered in Japan, who were presented with a scenario involving an algorithmic decision made by a public authority: a ministry's decision to reject a grant application from their company (Study 1) and a tax authority's decision to select their company for an on-site tax inspection (Study 2). The studies revealed that offering the subjects some type of explanation had a positive effect, to varying extents, on their attitudes towards a decision, although the detailed results were not robust across the two studies. These findings call for nuanced inquiry, in both research and practice, into how best to design explanations of algorithmic decisions from societal and human-centric perspectives in different decision-making contexts.
This paper introduces systems theory and system safety concepts to ongoing academic debates about the safety of Machine Learning (ML) systems in the public sector. In particular, we analyze the risk factors of ML systems and of their respective institutional contexts, which affect the ability to control such systems. We use interview data to show abductively which risk factors of such systems are present in public professionals' perceptions and which factors systems theory would lead us to expect but are missing. Based on the hypothesis that ML systems are best addressed through a systems theory lens, we argue that the missing factors deserve greater attention in ongoing efforts to address ML system safety. These factors include the explication of safety goals and constraints, the inclusion of systemic factors in system design, the development of safety control structures, and the tendency of ML systems to migrate towards higher risk. Our observations support this hypothesis, and we therefore conclude that system safety concepts can be useful aids for policymakers who aim to improve ML system safety.
In the digital era, governance is undergoing a transformation, moving state–citizen engagement into online realms where citizens serve as users and collaborators in shaping services and policies. Empowering citizens to act as social innovators on issues affecting their lives and local communities is key to facilitating this transition. As interactions between the state and citizens become more convenient, governments are increasingly focusing on digital citizen empowerment (DCE) to improve the lives of their populace. Our study aims to understand the different dimensions of DCE and how it leads to better participation. It also examines the role of people's perceptions of the accountability mechanisms in place and how these perceptions can pave the way to enhanced participation behaviour. Employing a mixed-method approach, the study utilises structural equation modelling to examine the relations among e-participation, DCE, and public and social accountability. The results conceptualise DCE, identifying its four dimensions: emotional, cognitive, relational, and behavioural. Furthermore, they underscore the significance of citizens' perceptions of governmental and social accountability in fostering e-participation. These findings are subsequently validated through a focus group discussion involving specialists from relevant fields. The results indicate that behavioural empowerment is the most crucial dimension of DCE and that DCE enhances the quality of participation, with accountability mechanisms playing a pivotal role in achieving this outcome. Additionally, the findings reveal public disenchantment with e-government initiatives due to perceived administrative unresponsiveness. By pinpointing specific dimensions of individual empowerment, this study provides insights for policymakers seeking to deliver accountable e-government services that promote enhanced e-participation.