
Journal of responsible technology: Latest Publications

Artificial worlds and artificial minds: Authenticity and language learning in digital lifeworlds
Pub Date: 2025-07-29 DOI: 10.1016/j.jrt.2025.100131
Blair Matthews
Language learning is increasingly being extended into digital and online spaces that have been enhanced by simulated reality and augmented with data and artificial intelligence. While this may expand opportunities for language learning, some critics argue that digital spaces may represent a pastiche or a parody of reality. However, while there are genuine issues, such criticisms may often fall back on naïve or essentialist views of authenticity, in particular by narrowing language learning scenarios to real-life or genuine communication. I argue that research undersocialises authenticity by not taking social relations into sufficient consideration, which denies or elides the ways that authenticity is achieved. In this conceptual paper, I offer a relational account of authenticity, conceiving of digital environments within a stratified ontological framework in which authenticity is not inherent in individuals or texts, but instead emerges from complex social contexts. Authenticity, then, does not refer to the authenticity of texts or to “being oneself”, but to authenticity in relation to others. A stratified ontology provides opportunities to extend relations with others, offering what is described as a “submersion into a temporary agency”, where language learners can experiment with the social order in order to achieve authenticity of themselves in the target language. Finally, I present a relational pedagogy based on responsiveness, where feedback is distributed among disparate human and technical actors that facilitate, problematise or endorse authenticity.
Citations: 0
Toward a responsible and ethical authorization to operate: A case study in AI consulting
Pub Date: 2025-07-24 DOI: 10.1016/j.jrt.2025.100130
Jason M. Pittman, Geoff Schaefer
The US federal government mandates all technologies receive an Authorization to Operate (ATO). The ATO serves as a testament to the technology's security compliance. This process underscores a fundamental belief: technologies must conform to established security norms. Yet, the security-centric view does not include ethical and responsible AI. Unlike security parameters, ethical and responsible AI lacks a standardized framework for evaluation. This leaves a critical gap in AI governance. This paper presents our consulting experiences in addressing such a gap and introduces a pioneering ATO assessment instrument. The instrument integrates ethical and responsible AI principles into assessment decision-making. We delve into the instrument's design, shedding light on unique attributes and features. Furthermore, we discuss emergent best practices related to this ATO instrument. These include potential decision pitfalls of interest to practitioners and policymakers alike. Looking ahead, we envision an evolved version of this ethical and responsible ATO. This future iteration incorporates continuous monitoring capabilities and novel ethical measures. Finally, we offer insights for the AI community to evaluate their AI decision-making.
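The abstract describes the instrument only at a high level. As a minimal sketch of the assessment logic it points to, the Python below gates an ATO-style authorization on security and responsible-AI criteria in the same decision, rather than treating ethics as a separate advisory track. The class, function, and example criteria (Criterion, assess_ato, the checklist items) are illustrative assumptions, not the authors' instrument.

```python
# Hypothetical sketch of an ATO-style assessment that evaluates responsible-AI
# criteria alongside security ones, loosely inspired by the instrument
# described above. Criterion names and the checklist are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Criterion:
    name: str
    category: str  # "security" or "responsible_ai"
    satisfied: bool
    note: str = ""

def assess_ato(criteria: list[Criterion]) -> dict:
    """Authorize only if every security AND every responsible-AI criterion passes."""
    failures = [c for c in criteria if not c.satisfied]
    return {
        "authorized": not failures,
        "failed": [f"{c.category}: {c.name} ({c.note})" for c in failures],
    }

if __name__ == "__main__":
    checklist = [
        Criterion("encryption at rest", "security", True),
        Criterion("access control review", "security", True),
        Criterion("bias evaluation documented", "responsible_ai", False,
                  "no disparate-impact analysis on file"),
        Criterion("human oversight defined", "responsible_ai", True),
    ]
    result = assess_ato(checklist)
    if result["authorized"]:
        print("ATO granted")
    else:
        print("ATO denied; failed criteria:", result["failed"])
```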
Citations: 0
Unravelling responsibility for AI
Pub Date: 2025-07-23 DOI: 10.1016/j.jrt.2025.100124
Zoe Porter, Philippa Ryan, Phillip Morgan, Joanna Al-Qaddoumi, Bernard Twomey, Paul Noordhof, John McDermid, Ibrahim Habli
It is widely acknowledged that we need to establish where responsibility lies for the outputs and impacts of AI-enabled systems. This is important to achieve justice and compensation for victims of AI harms, and to inform policy and engineering practice. But without a clear, thorough understanding of what ‘responsibility’ means, deliberations about where responsibility lies will be, at best, unfocused and incomplete and, at worst, misguided. Furthermore, AI-enabled systems exist within a wider ecosystem of actors, decisions, and governance structures, giving rise to complex networks of responsibility relations. To address these issues, this paper presents a conceptual framework of responsibility, accompanied by a graphical notation and a general methodology for visualising these responsibility networks and for tracing different responsibility attributions for AI. Taking the three-part formulation ‘Actor A is responsible for Occurrence O,’ the framework unravels the concept of responsibility to clarify that there are different possibilities of who is responsible for AI, senses in which they are responsible, and aspects of events they are responsible for. The notation allows these permutations to be represented graphically. The methodology enables users to apply the framework to specific scenarios. The aim is to offer a foundation to support stakeholders from diverse disciplinary backgrounds to discuss and address complex responsibility questions in hypothesised and real-world cases involving AI. The work is illustrated by application to a fictitious scenario of a fatal collision between a crewless, AI-enabled maritime vessel in autonomous mode and a traditional, crewed vessel at sea.
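The paper's notation is graphical, but its three-part formulation lends itself to a simple data model. The sketch below shows one way such a responsibility network could be recorded and queried; the sense labels and the example attributions (loosely echoing the maritime scenario) are assumptions for illustration, not the authors' taxonomy or notation.

```python
# Sketch of the three-part relation 'Actor A is responsible (in some sense)
# for Occurrence O' as a queryable network. The sense labels are assumed,
# not the paper's taxonomy.
from collections import defaultdict
from typing import NamedTuple

class Responsibility(NamedTuple):
    actor: str       # who is responsible
    sense: str       # the sense in which they are responsible, e.g. "causal", "role", "legal"
    occurrence: str  # the event, or aspect of the event, they are responsible for

class ResponsibilityNetwork:
    def __init__(self) -> None:
        self.relations: list[Responsibility] = []

    def attribute(self, actor: str, sense: str, occurrence: str) -> None:
        self.relations.append(Responsibility(actor, sense, occurrence))

    def who_is_responsible_for(self, occurrence: str) -> dict[str, list[str]]:
        """Group the actors responsible for an occurrence by the sense of responsibility."""
        by_sense: dict[str, list[str]] = defaultdict(list)
        for r in self.relations:
            if r.occurrence == occurrence:
                by_sense[r.sense].append(r.actor)
        return dict(by_sense)

net = ResponsibilityNetwork()
# Illustrative attributions, loosely modelled on the paper's maritime scenario.
net.attribute("vessel operator", "role", "fatal collision")
net.attribute("autonomy software developer", "causal", "fatal collision")
net.attribute("flag-state regulator", "legal", "fatal collision")
print(net.who_is_responsible_for("fatal collision"))
```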
Citations: 0
Intersecting social identity and drone use in humanitarian contexts: Psychological insights for legal decisions and responsible innovation
Pub Date: 2025-07-23 DOI: 10.1016/j.jrt.2025.100129
Anastasia Kordoni, Mark Levine, Amel Bennaceur, Carlos Gavidia-Calderon, Bashar Nuseibeh
While the technical and ethical challenges of using drones in Search-and-Rescue operations for transnationally displaced individuals have been explored, how drone footage can shape the psychological processes at play and impact post-rescue legal decision-making has been overlooked. This paper investigates how transnationally displaced individuals' social identities are portrayed in court and the role of drone footage in reinforcing these identities. We conducted a discourse analysis of 11 open-access asylum and deportation cases following drone-assisted Search-and-Rescue operations at sea (2015–2021). Our results suggest two primary identity constructions: as victims and as traffickers, each underpinned by conflicting psychological processes. The defence portrayed the defendants through the lens of vulnerability, while the prosecution portrayed them through that of unlawfulness. Psychological attributions of drone footage contributed differently to identity portrayal, influencing legal decisions regarding the status and entitlements of transnationally displaced individuals. We discuss the socio-ethical implications of these findings and propose a psychosocial account for responsible innovation in technology-mediated humanitarian contexts.
Citations: 0
Navigating the complexities of AI and digital governance: the 5W1H framework
Pub Date: 2025-07-18 DOI: 10.1016/j.jrt.2025.100127
S. Matthew Liao, Iskandar Haykel, Katherine Cheung, Taylor Matalon
As AI and digital technologies advance rapidly, governance frameworks struggle to keep pace with emerging applications and risks. This paper introduces a "5W1H" framework to systematically analyze AI governance proposals through six key questions: What should be regulated (data, algorithms, sectors, or risk levels), Why regulate (ethics, legal compliance, market failures, or national interests), Who should regulate (industry, government, or public stakeholders), When regulation should occur (upstream, downstream, or lifecycle approaches), Where it should take place (local, national, or international levels), and How it should be enacted (hard versus soft regulation). The framework is applied to compare the European Union's AI Act with the current U.S. regulatory landscape, revealing the EU's comprehensive, risk-based approach versus America's fragmented, sector-specific strategy. By providing a structured analytical tool, the 5W1H framework helps policymakers, researchers, and stakeholders navigate complex AI governance decisions and identify areas for improvement in existing regulatory approaches.
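The six questions map naturally onto a structured record, which makes side-by-side comparison of governance regimes mechanical. The sketch below encodes the EU/US contrast the abstract draws; the class and the field values are paraphrases and assumptions for illustration, not the authors' instrument.

```python
# Sketch: the 5W1H questions as a structured record so governance proposals
# can be compared field by field. Values paraphrase the abstract's EU/US
# comparison; the class itself is an illustrative assumption.
from dataclasses import dataclass, fields

@dataclass
class Governance5W1H:
    what: str   # what is regulated: data, algorithms, sectors, or risk levels
    why: str    # rationale: ethics, legal compliance, market failures, national interests
    who: str    # regulator: industry, government, or public stakeholders
    when: str   # upstream, downstream, or lifecycle
    where: str  # local, national, or international
    how: str    # hard versus soft regulation

eu_ai_act = Governance5W1H(
    what="risk levels (comprehensive, risk-based)",
    why="fundamental rights and safety",
    who="government (EU institutions)",
    when="lifecycle",
    where="supranational (EU)",
    how="hard regulation",
)
us_landscape = Governance5W1H(
    what="sector-specific applications",
    why="mixed: market failures and national interests",
    who="fragmented agencies and industry self-governance",
    when="mostly downstream",
    where="federal and state level",
    how="largely soft regulation",
)

for f in fields(Governance5W1H):
    print(f"{f.name:>5}: EU = {getattr(eu_ai_act, f.name)} | US = {getattr(us_landscape, f.name)}")
```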
Citations: 0
A turning point in AI: Europe's human-centric approach to technology regulation
Pub Date: 2025-07-17 DOI: 10.1016/j.jrt.2025.100128
Yavuz Selim Balcioğlu, Ahmet Alkan Çelik, Erkut Altindağ
This article examines the European Union's Artificial Intelligence Act, landmark legislation that sets forth comprehensive rules for the development, deployment, and governance of artificial intelligence technologies within the EU. Emphasizing a human-centric approach, the Act aims to ensure AI's safe use, protect fundamental rights, and foster innovation within a framework that supports economic growth. Through a detailed analysis, the article explores the Act's key provisions, including its risk-based approach, its bans and restrictions on certain AI practices, and its measures for safeguarding fundamental rights. It also discusses the potential impact on SMEs, the importance of balancing regulation with innovation, and the need for the Act to adapt in response to technological advancements. The role of stakeholders in ensuring the Act's successful implementation, and the significance of this legislative milestone for the future of AI, are highlighted. The article concludes with reflections on the opportunities the Act presents for ethical AI development and the challenges ahead in maintaining its relevance and efficacy in a rapidly evolving technological landscape.
Citations: 0
Soft law for unintentional empathy: addressing the governance gap in emotion-recognition AI technologies
Pub Date: 2025-07-16 DOI: 10.1016/j.jrt.2025.100126
Andrew McStay, Vian Bakir
Despite regulatory efforts, there is a significant governance gap in managing emotion recognition AI technologies and those that emulate empathy. This paper asks: should international soft law mechanisms, such as ethical standards, complement hard law in addressing governance gaps in emotion recognition and empathy-emulating AI technologies? To argue that soft law can provide detailed guidance, particularly for research ethics committees and related boards advising on these technologies, the paper first explores how legal definitions of emotion recognition, especially in the EU AI Act, rest on reductive and physiognomic criticism of emotion recognition. It then details that systems may be designed to intentionally empathise with their users, but also that empathy may be unintentional, or effectively incidental to how these systems work. Approaches that are non-reductive and avoid the labelling of emotion as conceived in the EU AI Act raise novel governance questions and a physiognomic critique of a more dynamic nature. The paper finds that international soft law can complement hard law, especially when critique is subtle but significant, when guidance is anticipatory in nature, and when detailed recommendations for developers are required.
Citations: 0
Ten simple guidelines for decolonising algorithmic systems
Pub Date: 2025-07-15 DOI: 10.1016/j.jrt.2025.100125
Dion R.J. O’Neale, Daniel Wilson, Paul T. Brown, Pascarn Dickinson, Manakore Rikus-Graham, Asia Ropeti
As the scope and prevalence of algorithmic systems and artificial intelligence for decision making expand, there is a growing understanding of the need for approaches that help anticipate adverse consequences and support the development and deployment of algorithmic systems that are socially responsible and ethically aware. This has led to increasing interest in "decolonising" algorithmic systems as a method of managing and mitigating harms and biases from algorithms, and of supporting social benefits from algorithmic decision making for Indigenous peoples.
This article presents ten simple guidelines for giving practical effect to foundational Māori (the Indigenous people of Aotearoa New Zealand) principles in the design, deployment, and operation of algorithmic systems. The guidelines are based on previously established literature regarding ethical use of Māori data. Where possible we have related these guidelines and recommendations to other development practices, for example, to open-source software.
While not intended to be exhaustive or extensive, we hope that these guidelines are able to facilitate and encourage those who work with Māori data in algorithmic systems to engage with processes and practices that support culturally appropriate and ethical approaches for algorithmic systems.
Citations: 0
Participatory research in low resource settings - Endeavours in epistemic justice at the Banyan, India
Pub Date: 2025-06-24 DOI: 10.1016/j.jrt.2025.100123
Mrinalini Ravi, Swarna Tyagi, Vandana Gopikumar, Emma Emily de Wit, Joske Bunders, Deborah Padgett, Barbara Regeer
Involving persons with lived experience in knowledge generation through participatory research (PR) has become increasingly important as a way to challenge power structures in knowledge production and research. In the case of persons with lived experiences of mental illness, participatory research has gained popularity since the early 1970s, but there is little empirical work from countries like India on how PR can be implemented in psychiatric settings.
This study focuses on exploring the way persons with lived experiences of mental illness can be engaged as peer researchers in a service utilisation audit of The Banyan’s inpatient, outpatient and inclusive living facilities. The audit was an attempt by The Banyan to co-opt clients as peer-researchers, thereby enhancing participatory approaches to care planning and provision. Notes and transcripts of research process activities (three meetings for training purposes), 180 interviews as part of the audit, as well as follow up Focus Group Discussions (n = 4) conducted with 18 peer researchers, were used to document their experiences and gather feedback on the training and research process.
We found that, set against the lack of formal education in the past, the opportunity and support received to be part of a research endeavour elicited a sense of pride, relief, and liberation in peer researchers. Additionally, actualising the role of an academic and researcher, and not just being passive responders to people in positions of intellectual and systemic power, engendered a sense of responsibility and accountability to peer researchers, and to the mental health system. Thirdly, supporting persons with experiences of mental illness in participatory research activities, especially in the context of low resource settings, requires specific consideration of the practical conditions and adjustments needed to avoid tokenism. Finally, both peer and staff researchers spoke about persisting hierarchies between them, which deserve attention.
We conclude that participatory research has significant scope amongst clients from disadvantaged communities in low-resource settings. Respondents repeatedly expressed an urgency for persons with lived experience to contribute to mental health pedagogy and, in so doing, disrupt archaic treatment approaches. Experiences from this enquiry also call for a rethink on how training in research can be developed for individuals without formal education and with cognitive difficulties, with the help of auditory support systems, such that key concepts are available and accessible, and long-term memory becomes less of a deterrent in the pursuit of knowledge and truth.
Citations: 0
A capability approach to ethical development and internal auditing of AI technology
Pub Date: 2025-06-01 DOI: 10.1016/j.jrt.2025.100121
Mark Graves, Emanuele Ratti
Responsible artificial intelligence (AI) requires integrating ethical awareness into the full process of designing and developing AI, including ethics-based auditing of AI technology. We claim the Capability Approach (CA) of Sen and Nussbaum grounds AI ethics in essential human freedoms and can increase awareness of the moral dimension in the technical decision making of developers and data scientists constructing data-centric AI systems. Our use of CA focuses awareness on the ethical impact that day-to-day technical decisions have on the freedom of data subjects to make choices and live meaningful lives according to their own values. For internal auditing of AI technology development, we design and develop a light-weight ethical auditing tool (LEAT) that uses simple natural language processing (NLP) techniques to search design and development documents for relevant ethical characterizations. We describe how CA guides our design, demonstrate LEAT on both principle- and capabilities-based use cases, and characterize its limitations.
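The abstract characterises LEAT only at a high level: simple NLP search over design and development documents for ethically relevant passages. The sketch below shows one plausible shape for such a scan, using a capability-inspired term lexicon; the categories, trigger terms, and function names are assumptions for illustration, not the authors' actual tool.

```python
# Hypothetical sketch of a LEAT-style scan: flag sentences in development
# documents that touch capability-related ethical terms. The lexicon is an
# illustrative guess at CA-inspired categories, not the authors' tool.
import re

LEXICON = {
    "bodily_health": {"health", "safety", "harm"},
    "affiliation": {"discrimination", "bias", "inclusion"},
    "practical_reason": {"consent", "choice", "autonomy"},
    "control_over_environment": {"privacy", "surveillance", "data subject"},
}

def audit_document(text: str) -> dict[str, list[str]]:
    """Return, per ethical category, the sentences mentioning its trigger terms."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    hits: dict[str, list[str]] = {category: [] for category in LEXICON}
    for sentence in sentences:
        lowered = sentence.lower()
        for category, terms in LEXICON.items():
            if any(term in lowered for term in terms):
                hits[category].append(sentence.strip())
    return {category: found for category, found in hits.items() if found}

design_note = ("Users must give consent before we collect location data. "
               "A bias review is scheduled before release. "
               "Health records are retained for six months.")
for category, found in audit_document(design_note).items():
    print(f"{category}: {found}")
```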
Citations: 0