
Latest publications: Journal of Responsible Technology

Data Hazards: An open-source vocabulary of ethical hazards for data-intensive projects
Pub Date : 2025-02-12 DOI: 10.1016/j.jrt.2025.100110
Natalie Zelenka , Nina H. Di Cara , Euan Bennet , Phil Clatworthy , Huw Day , Ismael Kherroubi Garcia , Susana Roman Garcia , Vanessa Aisyahsari Hanschke , Emma Siân Kuwertz
Understanding the potential for downstream harms from data-intensive technologies requires strong collaboration across disciplines and with the public. Having shared vocabularies of concerns reduces the communication barriers inherent in this work. The Data Hazards project (datahazards.com) contains an open-source, controlled vocabulary of 11 hazards associated with data science work, presented as ‘labels’. Each label has (i) an icon, (ii) a description, (iii) examples, and, crucially, (iv) suggested safety precautions. A reflective discussion format and resources have also been developed. These have been created over three years with feedback from interdisciplinary contributors, and their use evaluated by participants (N=47). The labels include concerns often out-of-scope for ethics committees, like environmental impact. The resources can be used as a structure for interdisciplinary harms discovery work, for communicating hazards, collecting public input or in educational settings. Future versions of the project will develop through feedback from open-source contributions, methodological research and outreach.
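The label structure described above (icon, description, examples, safety precautions) maps naturally onto a small data model. A minimal sketch in Python; the class name and the example entry's wording are illustrative, not the project's official text:

```python
from dataclasses import dataclass, field

@dataclass
class DataHazardLabel:
    """One entry in a controlled vocabulary of data-science hazards."""
    name: str
    icon: str                                  # identifier for the label's icon
    description: str
    examples: list[str] = field(default_factory=list)
    safety_precautions: list[str] = field(default_factory=list)

# Illustrative entry only; the wording here is hypothetical.
environmental_impact = DataHazardLabel(
    name="Environmental impact",
    icon="environment",
    description="The project consumes substantial energy or other resources.",
    examples=["Training a large model from scratch for a marginal gain."],
    safety_precautions=["Estimate and report energy use before scaling up."],
)
```

Modelling precautions as a first-class field mirrors the project's emphasis on pairing every hazard with suggested mitigations.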
Journal of Responsible Technology, Volume 21, Article 100110. Citations: 0
The age of AI in healthcare research: An analysis of projects submitted between 2020 and 2024 to the Estonian committee on Bioethics and Human Research
Pub Date : 2025-02-07 DOI: 10.1016/j.jrt.2025.100113
Aive Pevkur , Kadi Lubi
The ethical evaluation of healthcare research projects ensures the protection of study participants’ rights. Concurrently, the use of big health data and AI analysis is rising. A critical question is whether existing measures, including ethics committees, can competently evaluate AI-involved health projects and foresee risks. Our research aimed to identify and describe the types of research projects submitted between January 2020 and April 2024 to the Estonian Council for Bioethics and Human Research (EBIN) and to analyse AI use cases in recent years. Notably, the committee was established before the significant rise in AI usage in health research. We conducted a quantitative and qualitative content analysis of submission documents, using deductive and inductive approaches to gather information on the types of studies using AI and to draw some preliminary conclusions on readiness to evaluate projects. Results indicate that most applications come from universities, that the research draws on diverse data sources, and that the use of AI is rather uniform, with the applications showing little diversity in how AI capabilities are utilised.
Journal of Responsible Technology, Volume 21, Article 100113. Citations: 0
Research ethics committees as knowledge gatekeepers: The impact of emerging technologies on social science research
Pub Date : 2025-02-07 DOI: 10.1016/j.jrt.2025.100112
Anu Masso , Jevgenia Gerassimenko , Tayfun Kasapoglu , Mai Beilmann
This article investigates the evolution of research ethics within the social sciences, emphasising the shift from procedural norms borrowed from medical and natural sciences to social scientific discipline-specific and method-based principles. This transformation acknowledges the unique challenges and opportunities in social science research, particularly in the context of emerging data technologies such as digital data, algorithms, and artificial intelligence. Our empirical analysis, based on a survey conducted among international social scientists (N = 214), highlights the precariousness researchers face regarding these technological shifts. Traditional methods remain prevalent, despite the recognition of new digital methodologies that necessitate new ethical principles. We discuss the role of ethics committees as influential gatekeepers, examining power dynamics and access to knowledge within the research landscape. The findings underscore the need for tailored ethical guidelines that accommodate diverse methodological approaches, advocate for interdisciplinary dialogue, and address inequalities in knowledge production. This article contributes to the broader understanding of evolving research ethics in an increasingly data-driven world.
Journal of Responsible Technology, Volume 21, Article 100112. Citations: 0
Toward an anthropology of screens. Showing and hiding, exposing and protecting. Mauro Carbone and Graziano Lingua. Translated by Sarah De Sanctis. 2023. Cham: Palgrave Macmillan
Pub Date : 2025-02-06 DOI: 10.1016/j.jrt.2025.100111
Paul Trauttmansdorff
Toward an Anthropology of Screens by Mauro Carbone and Graziano Lingua is an insightful book about the cultural and philosophical significance of screens, which highlights their role in mediating human interactions, reshaping relationships with people and artefacts, and raising ethical questions about their pervasive influence in contemporary life.
Journal of Responsible Technology, Volume 21, Article 100111. Citations: 0
Exploring research practices with non-native English speakers: A reflective case study
Pub Date : 2025-02-05 DOI: 10.1016/j.jrt.2025.100109
Marilys Galindo, Teresa Solorzano, Julie Neisler
Our lived experiences of learning and working are personal and connected to our racial, ethnic, and cultural identities and needs. This is especially important for non-native English-speaking research participants, as English is the dominant language for learning, working, and the design of the technologies that support them in the United States. A reflective approach was used to critique the research practices that the authors were involved in co-designing with English-first and Spanish-first learners and workers. This case study explored designing learning and employment innovations to best support non-native English-speaking learners and workers during transitions along their career pathways. Three themes were generated from the data: the participants reported feeling the willingness to help, the autonomy of expression, and inclusiveness in the co-design process. From this critique, a structure was developed for researchers to guide decision-making and to inform ways of being more equitable and inclusive of non-native English-speaking participants in their practices.
Journal of Responsible Technology, Volume 21, Article 100109. Citations: 0
Process industry disrupted: AI and the need for human orchestration
Pub Date : 2025-01-29 DOI: 10.1016/j.jrt.2025.100105
M.W. Vegter , V. Blok , R. Wesselink
According to EU policy makers, the introduction of AI within Process Industry will help big manufacturing companies become more sustainable. At the same time, concerns arise about future work in these industries. As the EU also wants to actively pursue human-centered AI, this raises the question of how to implement AI within Process Industry in a way that is sustainable and takes the views and interests of workers in this sector into account. To provide an answer, we conducted ‘ethics parallel research’, which involves empirical research. We conducted an ethnographic study of AI development within process industry and specifically looked into the innovation process in two manufacturing plants. We show subtle but important differences that come with the respective job-related duties: while engineers continuously alter the plant as a technical system, operators hold a rather symbiotic relationship with the production process on site. Building on the framework of different mechanisms of techno-moral change, we highlight three ways in which workers might be morally impacted by AI. 1. Decisional - alongside the development of data-analytic tools, respective roles and duties are being decided; 2. Relational - data-analytic tools might exacerbate a power imbalance in which engineers may re-script the work of operators; 3. Perceptual - data-analytic technologies mediate perceptions, thus changing the relationship operators have to the production process. While in Industry 4.0 the problem is framed in terms of ‘suboptimal use’, in Industry 5.0 it should be thought of as ‘suboptimal development’.
Journal of Responsible Technology, Volume 21, Article 100105. Citations: 0
Human centred explainable AI decision-making in healthcare
Pub Date : 2025-01-10 DOI: 10.1016/j.jrt.2025.100108
Catharina M. van Leersum , Clara Maathuis
Human-centred AI (HCAI) implies building AI systems in a manner that comprehends human aims, needs, and expectations by assisting, interacting, and collaborating with humans. A further focus on explainable AI (XAI) allows one to gather insight into the data, reasoning, and decisions of AI systems, facilitating human understanding and trust and helping to identify issues like errors and bias. While current XAI approaches mainly have a technical focus, a transdisciplinary perspective and a socio-technical approach are necessary to understand the context and human dynamics. This is critical in the healthcare domain, as various risks could imply serious consequences for both the safety of human life and medical devices.
A reflective ethical and socio-technical perspective, where technical advancements and human factors co-evolve, is called human-centred explainable AI (HCXAI). This perspective sets humans at the centre of AI design with a holistic understanding of values, interpersonal dynamics, and the socially situated nature of AI systems. In the healthcare domain, to the best of our knowledge, limited knowledge exists on applying HCXAI, the ethical risks are unknown, and it is unclear which explainability elements are needed in decision-making to closely mimic human decision-making.
To tackle this knowledge gap, this article aims to design an actionable HCXAI ethical framework adopting a transdisciplinary approach that merges academic and practitioner knowledge and expertise from the AI, XAI, HCXAI, design science, and healthcare domains. To demonstrate the applicability of the proposed actionable framework in real scenarios and settings while reflecting on human decision-making, two use cases are considered. The first one is on AI-based interpretation of MRI scans and the second one on the application of smart flooring.
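One concrete explainability element of the kind discussed above is per-feature attribution. A minimal sketch using permutation importance (shuffle one input feature and measure the drop in accuracy); the toy model and data below are invented for illustration and are not drawn from the article's use cases:

```python
import random

def permutation_importance(model, X, y, feature_idx, n_repeats=30, seed=0):
    """Mean accuracy drop when one feature column is shuffled."""
    rng = random.Random(seed)
    base = sum(model(row) == label for row, label in zip(X, y)) / len(y)
    drops = []
    for _ in range(n_repeats):
        column = [row[feature_idx] for row in X]
        rng.shuffle(column)
        shuffled = [row[:feature_idx] + (v,) + row[feature_idx + 1:]
                    for row, v in zip(X, column)]
        acc = sum(model(row) == label for row, label in zip(shuffled, y)) / len(y)
        drops.append(base - acc)
    return sum(drops) / n_repeats

# Toy "model": predicts 1 when feature 0 exceeds a threshold; feature 1 is noise.
model = lambda row: int(row[0] > 0.5)
rng = random.Random(1)
X = [(rng.random(), rng.random()) for _ in range(200)]
y = [int(x0 > 0.5) for x0, _ in X]

informative = permutation_importance(model, X, y, feature_idx=0)
noise = permutation_importance(model, X, y, feature_idx=1)
assert informative > noise  # the model's decisions hinge on feature 0
```

In an HCXAI setting the attribution itself is only a starting point; how it is rendered for a clinician versus a developer is part of the design problem.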
Journal of Responsible Technology, Volume 21, Article 100108. Citations: 0
Decentralized governance in action: A governance framework of digital responsibility in startups
Pub Date : 2025-01-10 DOI: 10.1016/j.jrt.2025.100107
Yangyang Zhao , Jiajun Qiu
The rise of digital technologies has fueled the emergence of decentralized governance among startups. However, this trend imposes new challenges in digitally responsible governance, such as technology usage, business accountability, and many other issues, particularly in the absence of clear guidelines. This paper explores two types of digital startups with decentralized governance: digitally transformed (e.g., DAO) and IT-enabled decentralized startups. We adapt the previously described Corporate Digital Responsibility model into a streamlined seven-cluster governance framework that is more directly applicable to these novel organizations. Through a case study, we illustrate the practical value of the conceptual framework and find key points vital for digitally responsible governance by decentralized startups. Our findings lay a conceptual and empirical groundwork for in-depth and cross-disciplinary future inquiries into digital responsibility issues in decentralized settings.
Journal of Responsible Technology, Volume 21, Article 100107. Citations: 0
Exploring expert and public perceptions of answerability and trustworthy autonomous systems
Pub Date : 2025-01-09 DOI: 10.1016/j.jrt.2025.100106
Louise Hatherall, Nayha Sethi
The emerging regulatory landscape addressing autonomous systems (AS) is underpinned by the notion that such systems be trustworthy. What individuals and groups need to determine a system as worthy of trust has consequently attracted research from a range of disciplines, although important questions remain. These include how to ensure trustworthiness in a way that is sensitive to individual histories and contexts, as well as if, and how, emerging regulatory frameworks can adequately secure the trustworthiness of AS. This article reports the socio-legal analysis of four focus groups with publics and professionals exploring whether answerability can help develop trustworthy AS in health, finance, and the public sector. It finds that answerability is beneficial in some contexts, and that to find AS trustworthy, individuals often need answers about future actions and how organisational values are embedded within a system. It also reveals pressing issues demanding attention for meaningful regulation of such systems, including dissonances between what publics and professionals identify as ‘harm’ where AS are deployed, and a significant lack of clarity about the expectations of regulatory bodies in the UK. The article discusses the implications of these findings for the developing but rapidly setting regulatory landscape in the UK and EU.
{"title":"Exploring expert and public perceptions of answerability and trustworthy autonomous systems","authors":"Louise Hatherall,&nbsp;Nayha Sethi","doi":"10.1016/j.jrt.2025.100106","DOIUrl":"10.1016/j.jrt.2025.100106","url":null,"abstract":"<div><div>The emerging regulatory landscape addressing autonomous systems (AS) is underpinned by the notion that such systems be trustworthy. What individuals and groups need to determine a system as worthy of trust has consequently attracted research from a range of disciplines, although important questions remain. These include how to ensure trustworthiness in a way that is sensitive to individual histories and contexts, as well as if, and how, emerging regulatory frameworks can adequately secure the trustworthiness of AS. This article reports the socio-legal analysis of four focus groups with publics and professionals exploring whether answerability can help develop trustworthy AS in health, finance, and the public sector. It finds that answerability is beneficial in some contexts, and that to find AS trustworthy, individuals often need answers about future actions and how organisational values are embedded within a system. It also reveals pressing issues demanding attention for meaningful regulation of such systems, including dissonances between what publics and professionals identify as ‘harm’ where AS are deployed, and a significant lack of clarity about the expectations of regulatory bodies in the UK. The article discusses the implications of these findings for the developing but rapidly setting regulatory landscape in the UK and EU.</div></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"21 ","pages":"Article 100106"},"PeriodicalIF":0.0,"publicationDate":"2025-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143157257","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0