
Latest publications in Inf. Polity

Feminist perspectives to artificial intelligence: Comparing the policy frames of the European Union and Spain
Pub Date: 2021-06-03 DOI: 10.3233/ip-200299
Ariana Guevara-Gómez, Lucía O. de Zárate-Alcarazo, J. I. Criado
Artificial Intelligence (AI) is a disruptive technology that has gained interest among scholars, politicians, public servants, and citizens. In the debates on its advantages and risks, issues related to gender have arisen. In some cases, AI is depicted as a tool to promote gender equality; in others, as a contribution to perpetuating discrimination and biases. We develop a theoretical and analytical framework, combining the literature on technological frames and gender theory, to better understand the gender perspective of the nature, strategy, and use of AI in two institutional contexts. Our research question is: What are the assumptions, expectations and knowledge of the European Union institutions and the Spanish government on AI regarding gender? Methodologically, we conducted a document analysis of 23 official documents about AI issued by the European Union (EU) and Spain to understand how they frame the gender perspective in their discourses. According to our analysis, although both the EU and Spain have developed gender-sensitive AI policy frames, doubts remain about the definitions of key terms and the practical implementation of their discourses.
Citations: 2
How do we know that it works? Designing a digital democratic innovation with the help of user-centered design
Pub Date: 2021-04-16 DOI: 10.3233/IP-200282
Janne Berg, Jenny Lindholm, Joachim Högväg
Civic technology is used not only to improve policies but also to reinforce politics, and it has the potential to strengthen democracy. The search for new ways of involving citizens in decision-making processes, combined with a growing smartphone penetration rate, has generated expectations around smartphones as democratic tools. However, if civic applications do not meet citizens’ expectations and function poorly, they might remain unused and fail to increase interest in public issues. Therefore, there is a need to apply a citizen’s perspective to civic technology. The aim of this study is to gain knowledge about how citizens’ wishes and needs can be included in the design and evaluation process of a civic application. The study takes an explorative approach and uses mixed methods. We analyze which democratic criteria citizens emphasize in a user-centered design process of a civic application by conducting focus groups and interviews. Moreover, a laboratory usability study measures how well two democratic criteria, inclusiveness and publicity, are met in an application. The results show that citizens do emphasize democratic criteria when participating in the design of a civic application. A user-centered design process increases the likelihood of a usable application and can help fulfill the democratic criteria designers aim for.
Citations: 4
Building trust in science: Facilitative rather than restrictive mechanisms
Pub Date: 2021-02-22 DOI: 10.3233/IP-219001
The COVID-19 pandemic has confronted society with a range of issues, dilemmas and challenges. One topic that has attracted considerable attention has been trust in science. Whilst a majority of people have shown great faith in scientific work and have applauded the arrival of a vaccine that has been realized through scientific endeavor, a significant minority has also challenged the opinions of scientists and the reliability of their research findings. This minority argues that scientists and their science are flawed, biased and unsound, and captured by commercial and other interests. This minority has resisted the introduction of governmental measures based on scientific data and in doing so has challenged the legitimacy of government. The research that we publish in this journal has not stirred this level of societal debate. But, at the same time, the question of trust in academic work is playing an increasing role in our field. The erosion of trust in social science is more closely related to a series of high-profile cases of academic fraud, often driven by the desire of ambitious individuals to perform well in an academic world that is increasingly focused on measurable metrics, such as the H-index (for some interesting analyses see: Budd, 2013; Butler et al., 2017). In some countries, there are even direct financial incentives connected to the publication of articles in highly ranked journals, and this in turn may encourage some scholars into bad scientific practices. In view of the need to maintain trust in science, a variety of measures have been proposed and are being implemented. More emphasis is being placed on ‘research integrity’ and some journals demand that research has been reviewed by an ethics board. There is a call for more ‘research transparency’, which translates into an obligation to make original datasets openly available so that others can check the reliability of the research processes and findings presented in an article. There is also an emphasis on providing transparency about the funding of research and whether those funding the research may have shaped its outcomes. Increasingly, journals are putting mechanisms in place to check whether co-authors have been actively involved in the generation of a manuscript and what that role has been. The range of formal measures being introduced by journals is understandable, but these measures bring with them certain risks. The biggest risk is that the very measures that are intended to generate enhanced trust in academic work will actually, perversely, undermine this trust. The dynamic around trust has been analyzed comprehensively by Michael Power in his book The Audit Society (1997). Here, he argues that an increased emphasis on bureaucratic mechanisms to create trust can backfire, since they are based on a starting point of mistrust. For the academic world, this could mean that the increased emphasis on openness and transparency will actually result in a climate in which there is little space for discussion of how science really works and of how researchers deal with the difficulties they encounter in their work. As Power puts it, the formal reporting of scientific results will become increasingly ‘decoupled’ from actual practice.
Citations: 0
Undisclosed creators of digitalization: A critical analysis of representational practices
Pub Date: 2021-02-22 DOI: 10.3233/IP-200230
Katarina Lindblad-Gidlund, Leif Sundberg
The aim of this paper is to study over- and under-representational practices in governmental expert advisory groups on digitalization, in order to open up a dialogue on translations of digitalization. By uncovering how meanings converge and how interpretations associated with technology are stabilized, or perhaps even closed, this research is positioned within a critical research tradition. The chosen analytical framework stretches from technological culture (i.e., how and where myths and symbolic narratives are constructed), through a focus on the process of interpretation (i.e., the flexibility in how digitalization can be translated and attached to different political goals and values), to a dimension of firstness (addressing education, professional experience and geographical position to explore dominance and power). The results reveal a homogeneity that is potentially problematic and raise questions about the frames for interpreting what digitalization could and should be and do. We argue that the strong placement of digitalization in the knowledge base disclosed in this study hinders digitalization from being more knowledgeably translated.
Citations: 6
Machine learning, materiality and governance: A health and social care case study
Pub Date: 2021-02-22 DOI: 10.3233/ip-200264
J. Keen, R. Ruddle, Jan Palczewski, G. Aivaliotis, Anna Palczewska, C. Megone, Kevin Macnish
There is a widespread belief that machine learning tools can be used to improve decision-making in health and social care. At the same time, there are concerns that they pose threats to privacy and confidentiality. Policy makers therefore need to develop governance arrangements that balance the benefits and risks associated with the new tools. This article traces the history of the development of information infrastructures for secondary uses of personal datasets in health and social care, including routine reporting of activity and service planning. These developments provide broad context for a study of the governance implications of new tools for the analysis of health and social care datasets. We find that machine learning tools can increase the capacity to make inferences about the people represented in datasets, although this potential is limited by the poor quality of routine data, and the methods and results are difficult to explain to other stakeholders. We argue that current local governance arrangements are piecemeal but at the same time reinforce centralisation of the capacity to make inferences about individuals and populations. They do not provide adequate oversight of, or accountability to, the patients and clients represented in datasets.
Citations: 1
The algocracy as a new ideal type for government organizations: Predictive policing in Berlin as an empirical case
Pub Date: 2021-02-22 DOI: 10.3233/ip-200279
L. Lorenz, A. Meijer, Tino Schuppan
Motivated by the classic work of Max Weber, this study develops an ideal type for studying the transformation of government bureaucracy in the ‘age of algorithms’. We present the new ideal type – the algocracy – and position it vis-à-vis three other ideal types (machine bureaucracy, professional bureaucracy, infocracy). We show that while the infocracy uses technology to improve the machine bureaucracy, the algocracy automates the professional bureaucracy. By reducing and quantifying the uncertainty of decision-making processes in organizations, the algocracy rationalizes the exercise of rational-legal authority in the professional bureaucracy. To test the value of the ideal type, we use it to analyze the introduction of a predictive policing system in the Berlin police. Our empirical analysis confirms the value of the algocracy as a lens for studying empirical practices: the study highlights how the KrimPro system conditions professional assessments and centralizes control over complex police processes. This research therefore positions the algocracy at the heart of discussions about the future of the public sector and presents an agenda for further research.
Citations: 10
A critical analysis of the study of gender and technology in government
Pub Date: 2021-01-15 DOI: 10.2139/SSRN.3786174
Mary K. Feeney, Federica Fusi
Research at the intersection of feminist organizational theory and techno-science scholarship notes the importance of gender in technology design, adoption, implementation, and use within organizations, and how technology in the workplace shapes and is shaped by gender. While governments are committed to advancing gender equity in the workplace, feminist theory is rarely applied to the analysis of the use, adoption, and implementation of technology in government settings from the perspective of public managers and employees. In this paper, we argue that e-government research and practice can benefit from drawing on three streams of feminist research: 1) studying gender as a social construct, 2) researching gender bias in data, technology use, and design, and 3) assessing gendered representation in technology management. Drawing from feminist research, we offer six propositions and several research questions for advancing research on e-government and gender in public sector workplaces.
Citations: 0
Administration by algorithm: A risk management framework
Pub Date: 2020-12-04 DOI: 10.3233/ip-200249
F. Bannister, R. Connolly
Algorithmic decision-making is neither a recent phenomenon nor one necessarily associated with artificial intelligence (AI), though advances in AI are increasingly resulting in what were heretofore human decisions being taken over by, or becoming dependent on, algorithms and technologies like machine learning. Such developments promise many potential benefits, but are not without certain risks. These risks are not always well understood. It is not just a question of machines making mistakes; it is the embedding of values, biases and prejudices in software which can discriminate against both individuals and groups in society. Such biases are often hard either to detect or prove, particularly where there are problems with transparency and accountability and where such systems are outsourced to the private sector. Consequently, being able to detect and categorise these risks is essential in order to develop a systematic and calibrated response. This paper proposes a simple taxonomy of decision-making algorithms in the public sector and uses this to build a risk management framework with a number of components including an accountability structure and regulatory governance. This framework is designed to assist scholars and practitioners interested in ensuring structured accountability and legal regulation of AI in the public sphere.
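The abstract does not spell out the taxonomy or the components of the risk management framework, but the general idea of mapping properties of a decision-making algorithm to a calibrated oversight response can be sketched in code. The categories, weights and review tiers below are purely illustrative assumptions for this listing, not the authors' framework:

```python
from dataclasses import dataclass
from enum import Enum

class Autonomy(Enum):
    ADVISORY = 1      # a human makes the decision; the algorithm only informs it
    CONDITIONAL = 2   # the algorithm decides, but a human can override
    AUTONOMOUS = 3    # the algorithm decides with no routine human review

@dataclass
class AlgorithmicSystem:
    name: str
    autonomy: Autonomy
    affects_individuals: bool  # does it make decisions about specific people?
    outsourced: bool           # outsourcing weakens transparency/accountability

def risk_tier(system: AlgorithmicSystem) -> str:
    """Map a system to a review tier; scores and thresholds are illustrative only."""
    score = system.autonomy.value
    if system.affects_individuals:
        score += 2
    if system.outsourced:
        score += 1
    if score >= 5:
        return "high: mandatory audit and external oversight"
    if score >= 3:
        return "medium: documented review and appeal channel"
    return "low: routine internal monitoring"

print(risk_tier(AlgorithmicSystem("benefit triage", Autonomy.AUTONOMOUS, True, True)))
# → high: mandatory audit and external oversight
```

The point of such a structure is the one the abstract makes: the response is calibrated to the detected risk category rather than applied uniformly to all algorithmic systems.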
{"title":"Administration by algorithm: A risk management framework","authors":"F. Bannister, R. Connolly","doi":"10.3233/ip-200249","DOIUrl":"https://doi.org/10.3233/ip-200249","url":null,"abstract":"Algorithmic decision-making is neither a recent phenomenon nor one necessarily associated with artificial intelligence (AI), though advances in AI are increasingly resulting in what were heretofore human decisions being taken over by, or becoming dependent on, algorithms and technologies like machine learning. Such developments promise many potential benefits, but are not without certain risks. These risks are not always well understood. It is not just a question of machines making mistakes; it is the embedding of values, biases and prejudices in software which can discriminate against both individuals and groups in society. Such biases are often hard either to detect or prove, particularly where there are problems with transparency and accountability and where such systems are outsourced to the private sector. Consequently, being able to detect and categorise these risks is essential in order to develop a systematic and calibrated response. This paper proposes a simple taxonomy of decision-making algorithms in the public sector and uses this to build a risk management framework with a number of components including an accountability structure and regulatory governance. This framework is designed to assist scholars and practitioners interested in ensuring structured accountability and legal regulation of AI in the public sphere.","PeriodicalId":418875,"journal":{"name":"Inf. Polity","volume":"C-19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126771543","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 18
List of contributing reviewers 2020
Pub Date : 2020-12-04 DOI: 10.3233/ip-200011
{"title":"List of contributing reviewers 2020","authors":"","doi":"10.3233/ip-200011","DOIUrl":"https://doi.org/10.3233/ip-200011","url":null,"abstract":"","PeriodicalId":418875,"journal":{"name":"Inf. Polity","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123329164","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Artificial intelligence, bureaucratic form, and discretion in public service
Pub Date : 2020-12-04 DOI: 10.3233/ip-200223
Justin B. Bullock, Matthew M. Young, Yi-Fan Wang
This article examines the relationship between Artificial Intelligence (AI), discretion, and bureaucratic form in public organizations. We ask: How is the use of AI both changing and changed by the bureaucratic form of public organizations, and what effect does this have on the use of discretion? The diffusion of information and communication technologies (ICTs) has changed administrative behavior in public organizations. Recent advances in AI have led to its increasing use, but too little is known about the relationship between this distinct form of ICT and to both the exercise of discretion and bureaucratic form along the continuum from street- to system-levels. We articulate a theoretical framework that integrates work on the unique effects of AI on discretion and its relationship to task and organizational context with the theory of system-level bureaucracy. We use this framework to examine two strongly differing cases of public sector AI use: health insurance auditing, and policing. We find AI’s effect on discretion is nonlinear and nonmonotonic as a function of bureaucratic form. At the same time, the use of AI may act as an accelerant in transitioning organizations from street- and screen-level to system-level bureaucracies, even if these organizations previously resisted such changes.
{"title":"Artificial intelligence, bureaucratic form, and discretion in public service","authors":"Justin B. Bullock, Matthew M. Young, Yi-Fan Wang","doi":"10.3233/ip-200223","DOIUrl":"https://doi.org/10.3233/ip-200223","url":null,"abstract":"This article examines the relationship between Artificial Intelligence (AI), discretion, and bureaucratic form in public organizations. We ask: How is the use of AI both changing and changed by the bureaucratic form of public organizations, and what effect does this have on the use of discretion? The diffusion of information and communication technologies (ICTs) has changed administrative behavior in public organizations. Recent advances in AI have led to its increasing use, but too little is known about the relationship between this distinct form of ICT and to both the exercise of discretion and bureaucratic form along the continuum from street- to system-levels. We articulate a theoretical framework that integrates work on the unique effects of AI on discretion and its relationship to task and organizational context with the theory of system-level bureaucracy. We use this framework to examine two strongly differing cases of public sector AI use: health insurance auditing, and policing. We find AI’s effect on discretion is nonlinear and nonmonotonic as a function of bureaucratic form. At the same time, the use of AI may act as an accelerant in transitioning organizations from street- and screen-level to system-level bureaucracies, even if these organizations previously resisted such changes.","PeriodicalId":418875,"journal":{"name":"Inf. Polity","volume":"71 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133540272","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 23