
Latest publications in the Journal of responsible technology

Mapping (in)visibility and structural injustice in the digital space
Pub Date: 2022-04-01 DOI: 10.1016/j.jrt.2022.100024
Kebene Wodajo

This study aims to map digitally mediated injustice and to understand how judicial versus non-judicial bodies contextualize and translate such harm into human rights violations. This study surveys judicial and quasi-judicial cases and case reports by non-judicial bodies, mainly civil society organizations, international organizations, and media. It divides digitally mediated harms identified through the survey into three categories: direct, structural, and hybrid harm. It then examines how these three forms of harm are represented and articulated in judicial judgments and case reports. To differentiate between the three forms of digitally mediated harm, the study uses Iris Young's political philosophy of structural injustice and Johan Galtung's account of structural violence in peace studies. The focus of this study is understanding the forms of injustices that are present but rendered invisible because of how they are contextualized. Therefore, the epistemology of absence is applied as the theoretical approach, that is, interpretation of absence and invisibility. The epistemology of absence facilitates the identification of structural and intersectional injustices that are not articulated in the same way they are experienced in society. The assessment reveals four observations. (1) Structural injustice is rarely examined through a conventional adjudicatory process. (2) Harms of structural quality examined by courts are narrowly interpreted when translated into rights violations. (3) The right to privacy, often presented as a gateway right, addresses structural injustice only partially, as this right has a subject-centric narrow interpretation currently. (4) There are limitations to the mainstream way of seeing and representing risks and injustices in the digital space, and such a view yields metonymic reasoning when framing digitally produced harms. As a result, the conventional way of contextualization is blind to unconventional experiences of vulnerability, which renders structural and intersectional injustices experienced by marginalized communities invisible.

Citations: 4
Socio-economic impact assessments for new and emerging technologies
Pub Date: 2022-04-01 DOI: 10.1016/j.jrt.2021.100019
Rowena Rodrigues, Marina Diez Rituerto
Citations: 2
Bias detection by using name disparity tables across protected groups
Pub Date: 2022-04-01 DOI: 10.1016/j.jrt.2021.100020
Elhanan Mishraky, Aviv Ben Arie, Yair Horesh, Shir Meir Lador

As AI-based models take an increasingly central role in our lives, so does the concern for fairness. In recent years, mounting evidence reveals how vulnerable AI models are to bias and the challenges involved in detection and mitigation. Our contribution is three-fold. Firstly, we gather name disparity tables across protected groups, allowing us to estimate sensitive attributes (gender, race). Using these estimates, we compute bias metrics given a classification model’s predictions. We leverage only names/zip codes; hence, our method is model and feature agnostic. Secondly, we offer an open-source Python package that produces a bias detection report based on our method. Finally, we demonstrate that names of older individuals are better predictors of race and gender and that double surnames are a reasonable predictor of gender. We tested our method on publicly available datasets (US Congress) and classifiers (COMPAS) and found it to be consistent with them.
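
To make the mechanics concrete, the sketch below estimates group probabilities from a toy name disparity table and computes a soft demographic-parity gap over a classifier's predictions. The table values, group labels, and choice of metric are illustrative assumptions for this listing, not the contents of the authors' open-source package.

```python
from collections import defaultdict

# Toy name-disparity table: estimated P(group | first name), e.g. as could be
# derived from census counts. All values here are made up for illustration.
NAME_DISPARITY = {
    "maria": {"female": 0.96, "male": 0.04},
    "james": {"female": 0.03, "male": 0.97},
    "taylor": {"female": 0.55, "male": 0.45},
}

def estimate_group(first_name):
    """Return estimated group probabilities for a name, or None if unseen."""
    return NAME_DISPARITY.get(first_name.lower())

def demographic_parity_gap(names, predictions):
    """Soft demographic-parity gap: spread in expected positive-prediction
    rates across groups, weighting each record by its group probabilities."""
    expected_pos = defaultdict(float)
    expected_total = defaultdict(float)
    for name, pred in zip(names, predictions):
        probs = estimate_group(name)
        if probs is None:
            continue  # no estimate for this name; skipping is one policy choice
        for group, p in probs.items():
            expected_total[group] += p
            expected_pos[group] += p * pred
    rates = {g: expected_pos[g] / expected_total[g] for g in expected_total}
    return max(rates.values()) - min(rates.values()), rates

gap, rates = demographic_parity_gap(["Maria", "James", "Taylor", "Maria"], [1, 1, 0, 0])
print(f"positive rate by group: {rates}; parity gap: {gap:.2f}")
```

Because group membership is only estimated, each record contributes fractionally to every group, which avoids hard (and possibly wrong) assignments.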

Citations: 2
A Neo-Republican Critique of AI ethics
Pub Date: 2022-04-01 DOI: 10.1016/j.jrt.2021.100022
Jonne Maas

The AI ethics literature, which aims to guide the responsible development of AI systems, widely agrees that society is in dire need of effective accountability mechanisms for AI systems. Machine learning (ML) systems in particular give reason for concern due to their opaque and self-learning characteristics. Nevertheless, what such accountability mechanisms should look like remains either largely unspecified (e.g., ‘stakeholder input’) or ineffective (e.g., ‘ethical guidelines’). In this paper I argue that the difficulty of formulating and developing effective accountability mechanisms lies partly in the predominant focus on Mill's harm principle, rooted in the conception of freedom as non-interference. A strong focus on harm overshadows other moral wrongs, such as potentially problematic power dynamics between those who shape the system and those affected by it. I propose that the neo-republican conception of freedom as non-domination provides a suitable framework to inform responsible ML development. Domination, as understood by neo-republicans, is a moral wrong because it undermines the potential for human flourishing. To mitigate domination, neo-republicans call for accountability mechanisms that minimize arbitrary relations of power. Neo-republicanism should hence inform responsible ML development, as it provides substantive and concrete grounds for determining when accountability mechanisms are effective (i.e., when they are non-dominating).

Citations: 3
The role of empathy for artificial intelligence accountability
Pub Date: 2022-04-01 DOI: 10.1016/j.jrt.2021.100021
Ramya Srinivasan, Beatriz San Miguel González

Accountability encompasses multiple aspects, such as responsibility, justification, reporting, traceability, audit, and redress, so as to satisfy the diverse requirements of different stakeholders: consumers, regulators, developers, and so on. To take the needs of these different stakeholders into account, and thus put accountability in Artificial Intelligence into practice, the notion of empathy can be quite effective. Empathy is the ability to be sensitive to someone's needs based on understanding their affective states and intentions, caring for their feelings, and socialization, which can help in addressing the socio-technical challenges associated with accountability. The goal of this paper is twofold. First, we elucidate the connections between empathy and accountability, drawing findings from disciplines such as psychology, social science, and organizational science. Second, we suggest potential pathways to incorporate empathy.

Citations: 10
Accountability of platform providers for unlawful personal data processing in their ecosystems–A socio-techno-legal analysis of Facebook and Apple's iOS according to GDPR
Pub Date: 2022-04-01 DOI: 10.1016/j.jrt.2021.100018
Christian Kurtz, Florian Wittner, Martin Semmann, Wolfgang Schulz, Tilo Böhmann

Billions of people interact within platform-based ecosystems that contain the personal data of their daily lives, data that have become readily creatable, processable, and shareable. Here, platform providers facilitate interactions between three types of relevant actors: users, service providers, and third parties. Research in the information systems field has shown that platform providers influence their platform ecosystems to promote the contributions of service providers and exercise control by utilizing boundary resources. Through a socio-techno-legal analysis of two high-profile cases and their assessment under the General Data Protection Regulation (GDPR), we show that the design, arrangement, and interplay of boundary resources can influence whether and to what extent platform providers are accountable for unlawful personal data processing in their platform ecosystems. These findings matter for holding actors to account for personal data misuse in platform ecosystems and, thus, for the protection of personal liberty and rights in such socio-technical systems.

Citations: 2
Responsible governance of civilian unmanned aerial vehicle (UAV) innovations for Indian crop insurance applications
Pub Date: 2022-04-01 DOI: 10.1016/j.jrt.2022.100025
Anjan Chamuah, Rajbeer Singh

The civilian Unmanned Aerial Vehicle (UAV) is an emerging technology in Indian crop insurance applications. The technology is new to an agro-based country like India, with its diverse socio-cultural norms and values, and in such a diverse democracy, UAV governance and deployment pose significant challenges and risks. Charting out a proper framework for risk-free implementation of this governance has therefore emerged as a leading research topic in the discipline. In the innovations literature, Responsible Innovation (RI) addresses the governance of emerging technologies; RI is thus significant as a theoretical framework. The study asks how the framework of RI enables responsible governance, and who the main actors and stakeholders are in the governance and deployment of civilian UAVs in crop insurance applications in India. An in-depth interview method and snowball sampling were employed to identify interviewees from Delhi, Gujarat, and Rajasthan. Findings suggest that civilian UAVs are effective in handling risk, crop damage assessment, and claim settlement. The RI approach, through its dimensions and steps, enables equal participation and deliberation among all the actors and stakeholders of UAV governance: government bodies, research organizations, insurance agencies, local administration, and farmers. Effective regulation and adherence to accountability and responsibility promote responsible governance.

Citations: 3
The decision-point-dilemma: Yet another problem of responsibility in human-AI interaction
Pub Date: 2021-10-01 DOI: 10.1016/j.jrt.2021.100013
Laura Crompton

AI as decision support supposedly helps human agents make ‘better’ decisions more efficiently. However, research shows that it can, sometimes greatly, influence the decisions of its human users. While there has been a fair amount of research on intended AI influence, there seem to be great gaps in both theoretical and practical studies concerning unintended AI influence. In this paper I aim to address some of these gaps, and hope to shed some light on the ethical and moral concerns that arise with unintended AI influence. I argue that unintended AI influence has important implications for the way we perceive and evaluate human-AI interaction. To make this point approachable from both the theoretical and practical sides, and to avoid anthropocentrically laden ambiguities, I introduce the notion of decision points. Based on this, the main argument of this paper is presented in two consecutive steps: i) unintended AI influence does not allow for an appropriate determination of decision points (introduced as the decision-point-dilemma), and ii) this has important implications for the ascription of responsibility.

Citations: 3
Explainability for experts: A design framework for making algorithms supporting expert decisions more explainable
Pub Date: 2021-10-01 DOI: 10.1016/j.jrt.2021.100017
Auste Simkute, Ewa Luger, Bronwyn Jones, Michael Evans, Rhianne Jones

Algorithmic decision support systems are widely applied in domains ranging from healthcare to journalism. To ensure that these systems are fair and accountable, it is essential that humans can maintain meaningful agency and understand and oversee algorithmic processes. Explainability is often seen as a promising mechanism for keeping humans in the loop; however, current approaches are ineffective and can lead to various biases. We argue that explainability should be tailored to support the naturalistic decision-making and sensemaking strategies employed by domain experts and novices. Based on a review of the cognitive psychology and human factors literature, we map potential decision-making strategies dependent on expertise, risk, and time dynamics, and propose the conceptual Expertise, Risk and Time Explainability framework, intended to be used as explainability design guidelines. Finally, we present a worked example in journalism to illustrate the applicability of our framework in practice.
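
A minimal sketch of how such a framework could be operationalised as a lookup from decision context to explanation style is given below. The three dimensions follow the abstract; the specific strategy names and pairings are hypothetical illustrations, not the authors' published guidelines.

```python
from dataclasses import dataclass

@dataclass
class DecisionContext:
    expertise: str      # "novice" | "expert"
    risk: str           # "low" | "high"
    time_pressure: str  # "low" | "high"

def suggest_explanation(ctx: DecisionContext) -> str:
    """Map a decision context to an explanation style. The pairings are
    hypothetical illustrations of the Expertise-Risk-Time idea, not the
    framework's published recommendations."""
    if ctx.expertise == "expert" and ctx.time_pressure == "high":
        # Experts under time pressure tend toward recognition-primed,
        # cue-based decisions, so favour compact, salient cues.
        return "compact salient-cue summary"
    if ctx.expertise == "novice" and ctx.risk == "high":
        # Novices facing high stakes need scaffolding for deliberate,
        # analytical reasoning.
        return "step-by-step attribution with counterfactual examples"
    # Default: let the user choose their own depth.
    return "interactive drill-down explanation"

print(suggest_explanation(DecisionContext("expert", "high", "high")))
```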

Citations: 12
“Computer says no”: Algorithmic decision support and organisational responsibility
Pub Date: 2021-10-01 DOI: 10.1016/j.jrt.2021.100014
Angelika Adensamer, Rita Gsenger, Lukas Daniel Klausner

Algorithmic decision support (ADS) is increasingly used in a whole array of contexts and structures in various areas of society, influencing many people's lives. Its use raises questions about, among other things, accountability, transparency, and responsibility. While there is substantial research on algorithmic systems and responsibility in general, there is little to no prior research on organisational responsibility and its attribution. Our article aims to fill that gap: we give a brief overview of the central issues connected to ADS, responsibility, and decision-making in organisational contexts, and identify open questions and research gaps. Furthermore, we describe a set of guidelines and a complementary digital tool to assist practitioners in mapping responsibility when introducing ADS within their organisational context.
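
As a hint of what such a digital mapping tool might record, the sketch below ties each ADS-related task to an answerable role and flags unassigned tasks, which is where responsibility gaps typically arise. The schema, tasks, and roles are hypothetical and do not reproduce the authors' guidelines or tool.

```python
# Hypothetical responsibility map for an ADS deployment; the tasks, roles,
# and schema are illustrative and not taken from the authors' tool.
RESPONSIBILITY_MAP = {
    "selecting the model": {"accountable": "data science lead", "consulted": ["legal counsel"]},
    "setting decision thresholds": {"accountable": "department manager", "consulted": ["data science lead"]},
    "overriding a recommendation": {"accountable": "caseworker", "consulted": ["department manager"]},
}

def accountable_for(task: str) -> str:
    """Return the role answerable for a task, flagging gaps explicitly,
    since unattributed tasks are where responsibility diffuses."""
    entry = RESPONSIBILITY_MAP.get(task)
    return entry["accountable"] if entry else "UNASSIGNED (responsibility gap)"

print(accountable_for("setting decision thresholds"))  # department manager
print(accountable_for("monitoring for drift"))         # UNASSIGNED (responsibility gap)
```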

Citations: 7