
Journal of responsible technology: Latest publications

A Neo-Republican Critique of AI ethics
Pub Date : 2022-04-01 DOI: 10.1016/j.jrt.2021.100022
Jonne Maas

The AI Ethics literature, which aims at the responsible development of AI systems, widely agrees that society is in dire need of effective accountability mechanisms with regard to AI systems. In particular, machine learning (ML) systems give cause for concern due to their opaque and self-learning characteristics. Nevertheless, what such accountability mechanisms should look like remains either largely unspecified (e.g., ‘stakeholder input’) or ineffective (e.g., ‘ethical guidelines’). In this paper, I argue that the difficulty of formulating and developing effective accountability mechanisms lies partly in the predominant focus on Mill's harm principle, rooted in the conception of freedom as non-interference. A strong focus on harm overshadows other moral wrongs, such as potentially problematic power dynamics between those who shape the system and those affected by it. I propose that the neo-republican conception of freedom as non-domination provides a suitable framework to inform responsible ML development. Domination, as understood by neo-republicans, is a moral wrong because it undermines the potential for human flourishing. In order to mitigate domination, neo-republicans call for accountability mechanisms that minimize arbitrary relations of power. Neo-republicanism should hence inform responsible ML development, as it provides substantive and concrete grounds for when accountability mechanisms are effective (i.e., when they are non-dominating).

Citations: 3
The role of empathy for artificial intelligence accountability
Pub Date : 2022-04-01 DOI: 10.1016/j.jrt.2021.100021
Ramya Srinivasan, Beatriz San Miguel González

Accountability encompasses multiple aspects, such as responsibility, justification, reporting, traceability, audit, and redress, so as to satisfy the diverse requirements of different stakeholders (consumers, regulators, developers, etc.). In order to take into account the needs of different stakeholders, and thus to put accountability into practice in Artificial Intelligence, the notion of empathy can be quite effective. Empathy is the ability to be sensitive to the needs of someone based on understanding their affective states and intentions, caring for their feelings, and socialization, which can help in addressing the socio-technical challenges associated with accountability. The goal of this paper is twofold. First, we elucidate the connections between empathy and accountability, drawing findings from various disciplines such as psychology, social science, and organizational science. Second, we suggest potential pathways to incorporate empathy.

Citations: 10
Accountability of platform providers for unlawful personal data processing in their ecosystems–A socio-techno-legal analysis of Facebook and Apple's iOS according to GDPR
Pub Date : 2022-04-01 DOI: 10.1016/j.jrt.2021.100018
Christian Kurtz, Florian Wittner, Martin Semmann, Wolfgang Schulz, Tilo Böhmann

Billions of people interact within platform-based ecosystems containing the personal data of their daily lives: data that have become readily creatable, processable, and shareable. Here, platform providers facilitate interactions between three types of relevant actors: users, service providers, and third parties. Research in the information systems field has shown that platform providers influence their platform ecosystems to promote the contributions of service providers, and exercise control by utilizing boundary resources. Through a socio-techno-legal analysis of two high-profile cases and their assessment under the General Data Protection Regulation (GDPR), we show that boundary resource design, arrangement, and interplay can influence whether and to what extent platform providers are accountable for unlawful personal data processing in platform ecosystems. These findings can have a major impact on holding actors to account for personal data misuse in platform ecosystems and, thus, on the protection of personal liberty and rights in such socio-technical systems.

Citations: 2
Responsible governance of civilian unmanned aerial vehicle (UAV) innovations for Indian crop insurance applications
Pub Date : 2022-04-01 DOI: 10.1016/j.jrt.2022.100025
Anjan Chamuah, Rajbeer Singh

The civilian Unmanned Aerial Vehicle (UAV) is an emerging technology in Indian crop insurance applications. The technology is new to an agro-based country like India, with its diverse socio-cultural norms and values. In such a diverse democracy, however, UAV governance and deployment pose significant challenges and risks. Charting out a proper framework for a risk-free implementation of this governance has therefore emerged as a leading research topic in the discipline. In the innovations literature, Responsible Innovation (RI) addresses the governance of emerging technologies; thus, RI becomes significant as a theoretical framework. The study is intended to find out how the framework of RI enables responsible governance, and who the main actors and stakeholders are in the governance and deployment of civilian UAVs in crop insurance applications in India. An in-depth interview method and snowball sampling technique were employed to identify interviewees from Delhi, Gujarat, and Rajasthan. Findings suggest that civilian UAVs are effective in handling risk, crop damage assessment, and claim settlement. The RI approach, through its dimensions and steps, enables equal participation and deliberation among all the actors and stakeholders of UAV governance: government bodies, research organizations, insurance agencies, local administration, and farmers. Effective regulation and adherence to accountability and responsibility promote responsible governance.

Citations: 3
The decision-point-dilemma: Yet another problem of responsibility in human-AI interaction
Pub Date : 2021-10-01 DOI: 10.1016/j.jrt.2021.100013
Laura Crompton

AI as decision support supposedly helps human agents make ‘better’ decisions more efficiently. However, research shows that it can, sometimes greatly, influence the decisions of its human users. While there has been a fair amount of research on intended AI influence, there seem to be great gaps in both theoretical and practical studies concerning unintended AI influence. In this paper I aim to address some of these gaps, and hope to shed some light on the ethical and moral concerns that arise with unintended AI influence. I argue that unintended AI influence has important implications for the way we perceive and evaluate human-AI interaction. To make this point approachable from both the theoretical and the practical side, and to avoid anthropocentrically laden ambiguities, I introduce the notion of decision points. Based on this, the main argument of this paper proceeds in two consecutive steps: (i) unintended AI influence does not allow for an appropriate determination of decision points, which I introduce as the decision-point-dilemma; and (ii) this has important implications for the ascription of responsibility.

Citations: 3
Explainability for experts: A design framework for making algorithms supporting expert decisions more explainable
Pub Date : 2021-10-01 DOI: 10.1016/j.jrt.2021.100017
Auste Simkute, Ewa Luger, Bronwyn Jones, Michael Evans, Rhianne Jones

Algorithmic decision support systems are widely applied in domains ranging from healthcare to journalism. To ensure that these systems are fair and accountable, it is essential that humans can maintain meaningful agency, and understand and oversee algorithmic processes. Explainability is often seen as a promising mechanism for enabling human-in-the-loop oversight; however, current approaches are ineffective and can lead to various biases. We argue that explainability should be tailored to support the naturalistic decision-making and sensemaking strategies employed by domain experts and novices. Based on a review of the cognitive psychology and human factors literature, we map potential decision-making strategies dependent on expertise, risk, and time dynamics, and propose the conceptual Expertise, Risk and Time Explainability framework, intended to be used as explainability design guidelines. Finally, we present a worked example in journalism to illustrate the applicability of our framework in practice.
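To make the framework's intent concrete, here is a minimal sketch of how a mapping from expertise, risk, and time onto explanation design choices might be encoded. The strategy names, thresholds, and fields are illustrative assumptions, not the authors' specification.

```python
from dataclasses import dataclass
from enum import Enum

class Expertise(Enum):
    NOVICE = "novice"
    EXPERT = "expert"

@dataclass
class DecisionContext:
    expertise: Expertise
    risk: float            # 0.0 (low stakes) .. 1.0 (high stakes)
    time_budget_s: float   # seconds available before the decision is due

def explanation_design(ctx: DecisionContext) -> dict:
    """Map a decision context onto explanation design choices.

    Illustrative rules only, standing in for the paper's Expertise,
    Risk and Time Explainability framework.
    """
    if ctx.time_budget_s < 10:
        # Under time pressure, decision-makers rely on recognition of
        # salient cues; long causal chains would go unread.
        return {"style": "salient-cue highlights", "depth": "shallow"}
    if ctx.expertise is Expertise.NOVICE:
        # Novices lack domain mental models, so walk through the reasoning.
        return {"style": "step-by-step rationale", "depth": "deep"}
    if ctx.risk > 0.7:
        # High-stakes expert decisions: support hypothesis testing with
        # counterfactuals ("what input change would flip the output?").
        return {"style": "counterfactual comparison", "depth": "deep"}
    return {"style": "feature-importance summary", "depth": "medium"}

# Example: an expert vetting a high-risk claim with ample time.
print(explanation_design(DecisionContext(Expertise.EXPERT, risk=0.9, time_budget_s=120.0)))
```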

Citations: 12
“Computer says no”: Algorithmic decision support and organisational responsibility
Pub Date : 2021-10-01 DOI: 10.1016/j.jrt.2021.100014
Angelika Adensamer, Rita Gsenger, Lukas Daniel Klausner

Algorithmic decision support (ADS) is increasingly used in a whole array of different contexts and structures in various areas of society, influencing many people's lives. Its use raises questions about, among other things, accountability, transparency, and responsibility. While there is substantial research on algorithmic systems and responsibility in general, there is little to no prior research on organisational responsibility and its attribution. Our article aims to fill that gap: we give a brief overview of the central issues connected to ADS, responsibility, and decision-making in organisational contexts, and identify open questions and research gaps. Furthermore, we describe a set of guidelines and a complementary digital tool to assist practitioners in mapping responsibility when introducing ADS within their organisational context.
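As a rough illustration of what such a responsibility-mapping aid could record, here is a minimal sketch; the fields, roles, and example values are hypothetical and are not taken from the authors' guidelines or tool.

```python
from dataclasses import dataclass

@dataclass
class ResponsibilityAssignment:
    # Hypothetical fields for mapping responsibility around an ADS.
    decision: str                 # the ADS-supported decision being mapped
    accountable_role: str         # who answers for the outcome
    responsible_roles: list[str]  # who operates or can override the system
    escalation_path: str          # where contested outputs are taken
    review_cycle_months: int      # how often the mapping is revisited

registry = [
    ResponsibilityAssignment(
        decision="reject loan application flagged by scoring model",
        accountable_role="head of credit risk",
        responsible_roles=["loan officer", "model owner"],
        escalation_path="credit committee",
        review_cycle_months=6,
    ),
]

# A mapped decision with no named accountable role is an accountability gap.
gaps = [a.decision for a in registry if not a.accountable_role]
print(gaps or "every mapped decision has an accountable role")
```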

Citations: 7
The agency of the forum: Mechanisms for algorithmic accountability through the lens of agency
Pub Date : 2021-10-01 DOI: 10.1016/j.jrt.2021.100015
Florian Cech

The wicked challenge of designing measures aimed at improving algorithmic accountability demands human-centered approaches. Based on one of the most common definitions of accountability, as the relationship between an actor and a forum, this article presents an analytic lens in the form of actor and forum agency, through which the accountability process can be analysed. Two case studies, the Austrian Public Employment Service's AMAS system and the EnerCoach energy accounting system, serve as examples for an analysis of accountability based on the agency of the stakeholders. Developed through the comparison of the two systems, the Algorithmic Accountability Agency Framework (A³ framework), aimed at supporting the analysis and improvement of agency throughout the four steps of the accountability process, is presented and discussed.
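To ground the actor-forum vocabulary, the snippet below lays out a four-step accountability process as paired actor and forum capabilities. The step names and capabilities are assumptions in the spirit of the actor-forum definition of accountability; the paper's A³ framework itself is not reproduced here.

```python
from dataclasses import dataclass

@dataclass
class AccountabilityStep:
    name: str
    actor_agency: str  # what the account-giving actor can do at this step
    forum_agency: str  # what the forum can do at this step

# Hypothetical four-step process in the spirit of actor-forum accountability.
PROCESS = [
    AccountabilityStep("inform", "disclose system behaviour and records", "access and inspect them"),
    AccountabilityStep("explain", "justify design and decisions", "pose questions"),
    AccountabilityStep("debate", "respond to challenges", "contest the justifications"),
    AccountabilityStep("judge", "accept remedies", "pass judgement and impose consequences"),
]

for step in PROCESS:
    print(f"{step.name}: actor can {step.actor_agency}; forum can {step.forum_agency}")
```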

Citations: 4
Causality-based accountability mechanisms for socio-technical systems
Pub Date : 2021-10-01 DOI: 10.1016/j.jrt.2021.100016
Amjad Ibrahim, Stavros Kyriakopoulos, Alexander Pretschner

With the rapid deployment of socio-technical systems into all aspects of daily life, we need to be prepared for their failures. It is inherently impractical to specify all the lawful interactions of these systems; in turn, the possibility of invalid interactions cannot be excluded at design time. As modern systems might harm people or compromise assets when they fail, they ought to be accountable. Accountability is an interdisciplinary concept that cannot easily be described as a holistic technical property of a system. Thus, in this paper, we propose a bottom-up approach that enables accountability using goal-specific accountability mechanisms. Each mechanism provides forensic capabilities that help us identify the root cause of a specific type of event, both to eliminate the underlying (technical) problem and to assign blame. This paper presents the different ingredients required to design and build an accountability mechanism, and focuses on the technical and practical utilization of causality theories as a cornerstone to achieve our goal. To the best of our knowledge, the literature lacks a systematic methodology to envision, design, and implement abilities that promote accountability in systems. With a case study from the area of microservice-based systems, which we deem representative of modern complex systems, we demonstrate the effectiveness of the approach as a whole. We show that it is generic enough to accommodate different accountability goals and mechanisms.
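As a rough sketch of the kind of forensic analysis such a mechanism could perform, the snippet below runs a simple but-for (counterfactual) test over a recorded microservice event log: an event is flagged as a cause if flipping it would have averted the failure. The log format and the hand-written propagation model are illustrative assumptions, and the but-for test is deliberately simpler than the Halpern-Pearl-style actual-causality definitions a full mechanism would build on.

```python
# Hypothetical event log from one request trace through a microservice
# system: each entry records whether that service call succeeded.
log = {"auth": True, "cache": False, "db": True, "payment": False}

def request_failed(events: dict) -> bool:
    """Illustrative system model: did the user-visible request fail?"""
    # Payment needs the database; the request fails when payment fails,
    # while a cache miss alone only degrades latency.
    payment_ok = events["payment"] and events["db"]
    return not payment_ok

def but_for_causes(events: dict) -> list:
    """Events whose flipping, alone, would have averted the failure."""
    causes = []
    for name, value in events.items():
        flipped = {**events, name: not value}
        if request_failed(events) and not request_failed(flipped):
            causes.append(name)
    return causes

print(but_for_causes(log))  # -> ['payment']: flipping it averts the failure
```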

Citations: 1
The Role of Engineers in Harmonising Human Values for AI Systems Design
Pub Date : 2021-09-13 DOI: 10.21203/rs.3.rs-709596/v1
Steven Umbrello
Most engineers work within social structures governing, and governed by, a set of values that primarily emphasise economic concerns. The majority of innovations derive from these loci. Given the effects of these innovations on various communities, it is imperative that the values they embody are aligned with the values of those societies. Like other transformative technologies, artificial intelligence systems can be designed by a single organisation but be diffused globally, demonstrating impacts over time. This paper argues that, in order to design for this broad stakeholder group, engineers must adopt a systems thinking approach that allows them to understand the sociotechnicity of artificial intelligence systems across sociocultural domains. It claims that value sensitive design, and envisioning cards in particular, provides a solid first step towards helping designers harmonise human values, understood across spatiotemporal boundaries, with economic values, rather than the former coming at the opportunity cost of the latter.
Citations: 10