Ariana Guevara-Gómez, Lucía O. de Zárate-Alcarazo, J. I. Criado
Artificial Intelligence (AI) is a disruptive technology that has gained interest among scholars, politicians, public servants, and citizens. In the debates on its advantages and risks, issues related to gender have arisen. In some cases, AI is depicted as a tool to promote gender equality; in others, as a contribution to perpetuating discrimination and biases. We develop a theoretical and analytical framework, combining the literature on technological frames and gender theory, to better understand the gender perspective of the nature, strategy, and use of AI in two institutional contexts. Our research question is: what are the assumptions, expectations, and knowledge of the European Union institutions and the Spanish government on AI regarding gender? Methodologically, we conducted a document analysis of 23 official documents about AI issued by the European Union (EU) and Spain to understand how they frame the gender perspective in their discourses. According to our analysis, although both the EU and Spain have developed gender-sensitive AI policy frames, doubts remain about the definitions of key terms and the practical implementation of their discourses.
"Feminist perspectives to artificial intelligence: Comparing the policy frames of the European Union and Spain." Information Polity, 2021-06-03. DOI: 10.3233/ip-200299.
Civic technology is used not only to improve policies but also to reinforce politics, and it has the potential to strengthen democracy. The search for new ways of involving citizens in decision-making processes, combined with a growing smartphone penetration rate, has generated expectations around smartphones as democratic tools. However, if civic applications do not meet citizens’ expectations and function poorly, they might remain unused and fail to increase interest in public issues. Therefore, there is a need to apply a citizen’s perspective to civic technology. The aim of this study is to gain knowledge about how citizens’ wishes and needs can be included in the design and evaluation process of a civic application. The study has an explorative approach and uses mixed methods. We analyze which democratic criteria citizens emphasize in a user-centered design process of a civic application by conducting focus groups and interviews. Moreover, a laboratory usability study measures how well two democratic criteria, inclusiveness and publicity, are met in an application. The results show that citizens do emphasize democratic criteria when participating in the design of a civic application. A user-centered design process will increase the likelihood of a usable application and can help fulfill the democratic criteria designers aim for.
"How do we know that it works? Designing a digital democratic innovation with the help of user-centered design," Janne Berg, Jenny Lindholm, Joachim Högväg. Information Polity, 2021-04-16. DOI: 10.3233/IP-200282.
The COVID-19 pandemic has confronted society with a range of issues, dilemmas and challenges. One topic that has attracted considerable attention has been trust in science. Whilst a majority of people have shown great faith in scientific work and have applauded the arrival of a vaccine realized through scientific endeavor, a significant minority has challenged the opinions of scientists and the reliability of their research findings. This minority argues that scientists and their science are flawed, biased and unsound, and captured by commercial and other interests. It has resisted the introduction of governmental measures based on scientific data and, in doing so, has challenged the legitimacy of government. The research that we publish in this journal has not stirred this level of societal debate. But, at the same time, the question of trust in academic work is playing an increasing role in our field. The erosion of trust in social science is related more to a series of high-profile cases of academic fraud, often driven by the desire of ambitious individuals to perform well in an academic world that is increasingly focused on measurable metrics, such as the H-index (for some interesting analyses see: Budd, 2013; Butler et al., 2017). In some countries, there are even direct financial incentives connected to the publication of articles in highly ranked journals, and this in turn may encourage some scholars into bad scientific practices. In view of the need to maintain trust in science, a variety of measures have been proposed and are being implemented. More emphasis is being placed on ‘research integrity’, and some journals demand that research has been reviewed by an ethics board. There is a call for more ‘research transparency’, which translates into an obligation to make original datasets openly available so that others can check the reliability of the research processes and findings presented in an article.
There is also an emphasis on providing transparency about the funding of research and whether those funding research may have shaped research outcomes. Increasingly, journals are putting mechanisms in place to check whether co-authors have been actively involved in the generation of a manuscript and what that role has been. The range of formal measures being introduced by journals is understandable, but they bring with them certain risks. The biggest risk is that the very measures intended to generate enhanced trust in academic work will perversely undermine this trust. The dynamic around trust has been analyzed comprehensively by Michael Power in his book exploring the ‘Audit Society’ (1997). Here, he argues that an increased emphasis on bureaucratic mechanisms to create trust can backfire, since these mechanisms start from a position of mistrust. For the academic world, this could mean that the increased emphasis on openness and transparency will actually result in a climate in which there is little room to discuss how science really works and how researchers deal with the difficulties they encounter in their work. As Power puts it, the formal reporting of scientific results will become increasingly ‘decoupled’ from actual practice.
"Building trust in science: Facilitative rather than restrictive mechanisms" (editorial). Information Polity, 2021-02-22. DOI: 10.3233/IP-219001.
The aim of this paper is to study over- and under-representational practices in governmental expert advisory groups on digitalization, to open up a dialogue on translations of digitalization. By uncovering how meanings converge and how interpretations associated with technology are stabilized or perhaps even closed, this research is positioned within a critical research tradition. The chosen analytical framework stretches from technological culture (i.e., how and where myths and symbolic narratives are constructed), through a focus on the process of interpretation (i.e., the flexibility with which digitalization can be translated and attached to different political goals and values), to a dimension of firstness (addressing education, professional experience and geographical position to explore aspects of dominance and power). The results reveal a homogeneity that is potentially problematic and raises questions about the frames for interpreting what digitalization could and should be and do. We argue that the strong placement of digitalization in the knowledge base disclosed in this study hinders digitalization from being more knowledgeably translated.
"Undisclosed creators of digitalization: A critical analysis of representational practices," Katarina Lindblad-Gidlund, Leif Sundberg. Information Polity, 2021-02-22. DOI: 10.3233/IP-200230.
J. Keen, R. Ruddle, Jan Palczewski, G. Aivaliotis, Anna Palczewska, C. Megone, Kevin Macnish
There is a widespread belief that machine learning tools can be used to improve decision-making in health and social care. At the same time, there are concerns that they pose threats to privacy and confidentiality. Policy makers therefore need to develop governance arrangements that balance the benefits and risks associated with the new tools. This article traces the development of information infrastructures for secondary uses of personal datasets, including routine reporting of activity and service planning, in health and social care. These developments provide broad context for a study of the governance implications of new tools for the analysis of health and social care datasets. We find that machine learning tools can increase the capacity to make inferences about the people represented in datasets, although this potential is limited by the poor quality of routine data, and the methods and results are difficult to explain to other stakeholders. We argue that current local governance arrangements are piecemeal but at the same time reinforce centralisation of the capacity to make inferences about individuals and populations. They do not provide adequate oversight or accountability to the patients and clients represented in datasets.
"Machine learning, materiality and governance: A health and social care case study." Information Polity, 2021-02-22. DOI: 10.3233/ip-200264.
Motivated by the classic work of Max Weber, this study develops an ideal type to study the transformation of government bureaucracy in the ‘age of algorithms’. We present the new ideal type – the algocracy – and position it vis-à-vis three other ideal types (machine bureaucracy, professional bureaucracy, infocracy). We show that while the infocracy uses technology to improve the machine bureaucracy, the algocracy automates the professional bureaucracy. By reducing and quantifying the uncertainty of decision-making processes in organizations, the algocracy rationalizes the exercise of rational-legal authority in the professional bureaucracy. To test the value of the ideal type, we use it to analyze the introduction of a predictive policing system in the Berlin police. Our empirical analysis confirms the value of the algocracy as a lens for studying empirical practices: the study highlights how the KrimPro system conditions professional assessments and centralizes control over complex police processes. This research therefore positions the algocracy at the heart of discussions about the future of the public sector and presents an agenda for further research.
"The algocracy as a new ideal type for government organizations: Predictive policing in Berlin as an empirical case," L. Lorenz, A. Meijer, Tino Schuppan. Information Polity, 2021-02-22. DOI: 10.3233/ip-200279.
Research at the intersection of feminist organizational theory and techno-science scholarship notes the importance of gender in technology design, adoption, implementation, and use within organizations and how technology in the workplace shapes and is shaped by gender. While governments are committed to advancing gender equity in the workplace, feminist theory is rarely applied to the analysis of the use, adoption, and implementation of technology in government settings from the perspective of public managers and employees. In this paper, we argue that e-government research and practice can benefit from drawing from three streams of feminist research: 1) studying gender as a social construct, 2) researching gender bias in data, technology use, and design, and 3) assessing gendered representation in technology management. Drawing from feminist research, we offer six propositions and several research questions for advancing research on e-government and gender in public sector workplaces.
"A critical analysis of the study of gender and technology in government," Mary K. Feeney, Federica Fusi. 2021-01-15. DOI: 10.2139/SSRN.3786174.
Algorithmic decision-making is neither a recent phenomenon nor one necessarily associated with artificial intelligence (AI), though advances in AI are increasingly resulting in what were heretofore human decisions being taken over by, or becoming dependent on, algorithms and technologies like machine learning. Such developments promise many potential benefits, but they are not without risks, and these risks are not always well understood. It is not just a question of machines making mistakes; it is the embedding of values, biases and prejudices in software which can discriminate against both individuals and groups in society. Such biases are often hard either to detect or to prove, particularly where there are problems with transparency and accountability and where such systems are outsourced to the private sector. Consequently, being able to detect and categorise these risks is essential in order to develop a systematic and calibrated response. This paper proposes a simple taxonomy of decision-making algorithms in the public sector and uses it to build a risk management framework with a number of components, including an accountability structure and regulatory governance. This framework is designed to assist scholars and practitioners interested in ensuring structured accountability and legal regulation of AI in the public sphere.
"Administration by algorithm: A risk management framework," F. Bannister, R. Connolly. Information Polity, 2020-12-04. DOI: 10.3233/ip-200249.
This article examines the relationship between Artificial Intelligence (AI), discretion, and bureaucratic form in public organizations. We ask: how is the use of AI both changing and changed by the bureaucratic form of public organizations, and what effect does this have on the use of discretion? The diffusion of information and communication technologies (ICTs) has changed administrative behavior in public organizations. Recent advances in AI have led to its increasing use, but too little is known about the relationship between this distinct form of ICT and both the exercise of discretion and bureaucratic form along the continuum from street- to system-levels. We articulate a theoretical framework that integrates work on the unique effects of AI on discretion, and its relationship to task and organizational context, with the theory of system-level bureaucracy. We use this framework to examine two strongly differing cases of public sector AI use: health insurance auditing and policing. We find that AI’s effect on discretion is nonlinear and nonmonotonic as a function of bureaucratic form. At the same time, the use of AI may act as an accelerant in transitioning organizations from street- and screen-level to system-level bureaucracies, even if these organizations previously resisted such changes.
"Artificial intelligence, bureaucratic form, and discretion in public service," Justin B. Bullock, Matthew M. Young, Yi-Fan Wang. Information Polity, 2020-12-04. DOI: 10.3233/ip-200223.