Automation bias in public administration – an interdisciplinary perspective from law and psychology

IF 7.8 · CAS Region 1 (Management) · JCR Q1 (Information Science & Library Science) · Government Information Quarterly · Pub Date: 2024-06-22 · DOI: 10.1016/j.giq.2024.101953
Hannah Ruschemeier, Lukas J. Hondrich
Government Information Quarterly, Vol. 41, Issue 3, Article 101953. Open-access PDF: https://www.sciencedirect.com/science/article/pii/S0740624X24000455/pdfft?md5=f139afd2536a788af4e4774e64383581&pid=1-s2.0-S0740624X24000455-main.pdf
Citations: 0

Abstract

Automation bias in public administration – an interdisciplinary perspective from law and psychology

The objective of this paper is to break down the widely presumed dichotomy, especially in law, between fully automated decisions and human decisions from a psychological and normative perspective. This is particularly relevant because human oversight is seen as an effective means of quality control, including in the current AI Act. The phenomenon of automation bias argues against this assumption. We have investigated automation bias as a behavioral effect and examined its implications in normative institutional decision-making situations. Automation bias, whereby individuals rely excessively on machine-generated decisions or proposals, has far-reaching implications. Excessive reliance may lead to a failure to engage meaningfully with the decision at hand, an inability to detect automation failures, and an overall deterioration in decision quality, potentially up to a net-negative impact from the decision support system. As legal systems emphasize the role of human decisions in ensuring fairness and quality, this paper critically examines the inadequacies of current EU and national legal frameworks in addressing the risks of automation bias. Contributing a novel perspective, the article integrates psychological, technical, and normative elements to analyze automation bias and its legal implications. Anchoring human decisions within legal principles, it navigates the intersections between AI and human-machine interaction from a normative point of view. An exploration of the AI Act sheds light on potential avenues for improvement. In conclusion, the paper proposes four steps aimed at effectively countering the potential perils posed by automation bias. By linking psychological insights, legal analysis, and technical implications, it advocates a holistic approach to evolving legal frameworks in an increasingly automated world.

Source journal: Government Information Quarterly (Information Science & Library Science)
CiteScore: 15.70 · Self-citation rate: 16.70% · Articles published: 106
Journal overview: Government Information Quarterly (GIQ) delves into the convergence of policy, information technology, government, and the public. It explores the impact of policies on government information flows, the role of technology in innovative government services, and the dynamic between citizens and governing bodies in the digital age. GIQ serves as a premier journal, disseminating high-quality research and insights that bridge the realms of policy, information technology, government, and public engagement.
Latest articles in this journal:
- A more secure framework for open government data sharing based on federated learning
- Does trust in government moderate the perception towards deepfakes? Comparative perspectives from Asia on the risks of AI and misinformation for democracy
- Open government data and self-efficacy: The empirical evidence of micro foundation via survey experiments
- Transforming towards inclusion-by-design: Information system design principles shaping data-driven financial inclusiveness
- Bridging the gap: Towards an expanded toolkit for AI-driven decision-making in the public sector