
AI and ethics: latest publications

AI in human resources: efficiency, ethics, and emerging challenges
Pub Date : 2025-12-11 DOI: 10.1007/s43681-025-00862-x
Soumi Majumder, Syedahmed Salman, Nilanjan Dey

Through an extensive literature review, this study explores the integration of artificial intelligence (AI) in human resources (HR), focusing on its impact on organizational productivity, ethical considerations, and the emerging challenges that accompany widespread adoption. AI-driven HR solutions significantly increase efficiency in hiring, employee engagement, and workforce analytics by automating repetitive tasks and enabling data-driven decision-making. However, the study reveals persistent ethical concerns, particularly regarding algorithmic bias, transparency, and data privacy. Additionally, challenges such as inadequate skills among HR personnel, resistance to change, and ambiguous rules regarding AI deployment were identified as major barriers. The findings highlight the need for organizations to pair efficiency gains with solid ethical standards and transparent AI governance. HR leaders are encouraged to emphasize skill development, foster a culture of responsible AI use, and engage with evolving legal standards. Addressing these factors is crucial for realizing the full potential of AI in HR while mitigating risks to equity, trust, and employee well-being. Deploying AI tools in HR processes without transparent communication can erode employee trust, so company leaders should prioritize transparency in their AI applications, particularly when AI is used to monitor employees or to influence decisions that directly concern them. To underscore the significance of artificial intelligence in the current business landscape, we examine the challenges and opportunities of implementing AI in human resource management, along with its ethical implications.

Citations: 0
Beyond good and evil?: understanding the role of human values in modern reinforcement learning
Pub Date : 2025-12-11 DOI: 10.1007/s43681-025-00857-8
Theodore McCullough

This paper asks whether there are performance reasons for leveraging Human Values in the pre-training of Modern Reinforcement Learning (RL) models. As part of exploring this question, the paper surveys Modern RL algorithms generally, and Model-Based and Model-Free RL specifically. It then examines the treatment of Rewards in these model types through the Solution Concepts of Common Reward Games, Zero Sum Games, and General Sum Games (including Nash Equilibria). The Value Alignment Problem is then described as arising from Zero Sum Game Solution Concepts, and actual examples of this problem are provided. The paper goes on to explore how Richard Sutton and Stuart Russell propose to address the Value Alignment Problem. Finally, it examines the possible use of Supervised Learning to effectively pre-train Modern RL algorithms to address the Value Alignment Problem, and cites the success of the AlphaStar algorithm as an example of how pre-training with Human Values may have technical benefits.
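The pre-training recipe the abstract points to, popularised by AlphaStar, can be illustrated with a minimal sketch: behaviour cloning on human demonstrations followed by reinforcement-learning fine-tuning. This is an illustrative toy under stated assumptions, not the paper's method; the observations, demonstration labels, and reward function are synthetic stand-ins.

```python
# Minimal sketch, not the paper's method: supervised pre-training on human
# demonstrations (behaviour cloning), then policy-gradient fine-tuning.
# Observations, human actions, and the reward function are synthetic stand-ins.
import torch
import torch.nn as nn

OBS_DIM, N_ACTIONS = 8, 4
policy = nn.Sequential(nn.Linear(OBS_DIM, 64), nn.Tanh(), nn.Linear(64, N_ACTIONS))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

# Phase 1: supervised pre-training on (state, human action) pairs.
demo_states = torch.randn(512, OBS_DIM)             # stand-in demonstrations
demo_actions = torch.randint(0, N_ACTIONS, (512,))  # stand-in human choices
for _ in range(200):
    loss = nn.functional.cross_entropy(policy(demo_states), demo_actions)
    opt.zero_grad(); loss.backward(); opt.step()

# Phase 2: RL fine-tuning with a baseline-subtracted REINFORCE update.
def reward(states, actions):
    # Toy task reward; a real system would query the environment here.
    return (actions == states.argmax(dim=-1) % N_ACTIONS).float()

for _ in range(200):
    s = torch.randn(64, OBS_DIM)
    dist = torch.distributions.Categorical(logits=policy(s))
    a = dist.sample()
    r = reward(s, a)
    loss = -(dist.log_prob(a) * (r - r.mean())).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```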

Citations: 0
Addressing intersectional bias in AI recruitment using HITHIRE model: a fair, ethical, green AI and transparent hiring solution for Saudi Arabia’s diverse workforce in line with vision 2030
Pub Date : 2025-12-11 DOI: 10.1007/s43681-025-00844-z
Elham Albaroudi, Taha Mansouri, Mohammad Hatamleh, Ali Alameer

Artificial Intelligence (AI) is transforming recruitment by allowing organisations to make data-driven hiring decisions. However, AI-powered tools are prone to reinforcing biases instead of eliminating them, posing ethical and fairness concerns. Since AI-powered tools are critical in hiring, this study introduces HITHIRE, an AI-driven recruitment model designed to enhance transparency, fairness, and inclusivity within the diverse workforce of Saudi Arabia, in line with Vision 2030. A baseline model was first evaluated using Llama 3.1, BERT, and standard NLP techniques. However, the baseline model revealed significant biases in gender- and nationality-based hiring, making it unsuitable for Saudi Arabia's diverse hiring environment. The Llama 3.1 model was then enhanced through data augmentation, sentence transformers, standard scoring, and transparency mechanisms, resulting in the HITHIRE model. Fairness analysis demonstrated improvements across gender and nationality dimensions, with reduced Statistical Parity Difference (SPD) and Disparate Impact (DI) scores. The findings highlight the potential of ethical AI integration in recruitment, ensuring unbiased, accountable, and transparent hiring practices. HITHIRE sets a precedent for AI-driven fairness in recruitment, contributing to HR policies and ethical AI discourse globally.
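For reference, the two group-fairness metrics reported above, Statistical Parity Difference (SPD) and Disparate Impact (DI), can be computed directly from binary hiring decisions. The sketch below uses their common definitions on invented data; it is not the authors' evaluation code.

```python
# Minimal sketch, not the authors' code: the two group-fairness metrics the
# abstract reports, computed from binary hiring decisions. Group labels and
# decisions below are illustrative placeholders.
import numpy as np

def statistical_parity_difference(decisions, group):
    """SPD = P(hire | unprivileged) - P(hire | privileged); 0 is parity."""
    return decisions[group == 0].mean() - decisions[group == 1].mean()

def disparate_impact(decisions, group):
    """DI = P(hire | unprivileged) / P(hire | privileged); 1.0 is parity."""
    return decisions[group == 0].mean() / decisions[group == 1].mean()

rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)                         # 0 = unprivileged, 1 = privileged
decisions = rng.binomial(1, np.where(group == 1, 0.6, 0.45))  # biased toy hiring outcomes

print(statistical_parity_difference(decisions, group))  # negative => disparity
print(disparate_impact(decisions, group))               # below 1.0 => disparity
```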

Citations: 0
AI-powered LCNC implementations and gender: a comparative study of role attribution bias
Pub Date : 2025-12-10 DOI: 10.1007/s43681-025-00843-0
Spyridon Tsoukalas, Dialekti Athina Voutyrakou, Marios Karelis, Constantine Skordoulis, Gianna Katsiampoura, Markos Avlonitis, Patrick Mikalef

This study investigates whether AI-powered Low-Code/No-Code (LCNC) solutions may unintentionally generate gender-biased responses. We developed four AI-powered LCNC implementations (i.e., Spreadsheet-based, Workflow-based, Web-Application-based and Mobile-Application-based), using different generative AI models, including those from OpenAI, DeepSeek, Claude, and Google DeepMind, and evaluated their outputs in response to prompts designed to highlight potential gendered associations in roles, traits, and personal preferences. Our analysis consists of two parts. First, we applied a mixed-methods structured content analysis to systematically identify potential stereotypical patterns in the responses of the AI models. Second, we compared the outputs across the different AI models for each prompt to explore variations in gender bias-related behavior. Our findings raise an ethical concern: without appropriate policies and guidelines in place, AI-powered LCNC solutions may replicate or even amplify existing societal biases. This work contributes to ongoing discussions on responsible AI integration and bias-aware design, especially within the evolving LCNC ecosystem.

Citations: 0
An empirical evaluation of ChatGPT-3.5 and ChatGPT-4.0 for autism-related queries: validity, completeness, and consistency
Pub Date : 2025-12-10 DOI: 10.1007/s43681-025-00890-7
Ali Naderi Malek, Patricia Prelock, Atefeh Jannesari, Fatemeh Mehrpour

This study evaluates the reliability, completeness, and thematic consistency of responses generated by legacy versions of ChatGPT—3.5 (2023) and 4.0 (2024)—across five autism-related domains: Diagnosis, Prognosis, Prevalence, Evaluation, and Treatment. The study provides a historical snapshot of model performance while highlighting broader patterns of accuracy, limitations, and implications for the ongoing use of large language models in autism-related contexts. Sixty-nine questions across the five domains were presented to both ChatGPT-3.5 and ChatGPT-4.0, and the responses were evaluated for accuracy, length of explanation, completeness, and thematic consistency. Comparative analyses incorporated both descriptive and inferential statistics, and thematic patterns were examined using cosine similarity. Both ChatGPT-3.5 and ChatGPT-4.0 exhibited high accuracy, particularly in structured domains such as Diagnosis and Treatment. ChatGPT-4.0 provided slightly richer descriptive detail in more complex areas, though this occasionally introduced thematic variability. Completeness scores were moderate across domains (ranging from approximately 0.20 to 0.50), reflecting that responses often captured some, but not all, expected key points. A limited consistency check with ChatGPT-5.0 (September 2025) demonstrated broad stability of conclusions, with no decrease in accuracy and only minor updates observed in prevalence-related answers. ChatGPT-3.5 and ChatGPT-4.0 show substantial potential as supplementary resources for autism-related information. Although these models consistently provide accurate content, their moderate completeness underscores the need for human oversight to ensure quality and comprehensiveness. Understanding their reliability, limitations, and evolving nature is essential as large language models continue to be explored for healthcare education and patient-facing information, though not as replacements for professional training or clinical care.
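The thematic-consistency measure mentioned above can be illustrated with a small sketch. The study's exact vectorisation is not described here, so the TF-IDF representation and the two response texts below are assumptions.

```python
# Minimal sketch, assumptions rather than the study's pipeline: scoring thematic
# consistency between two model responses as the cosine similarity of their
# TF-IDF vectors. The response texts are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

resp_gpt35 = "Autism diagnosis relies on behavioural observation and standardised tools such as ADOS-2."
resp_gpt40 = "Clinicians diagnose autism through behavioural assessment, developmental history, and instruments like ADOS-2."

vectors = TfidfVectorizer().fit_transform([resp_gpt35, resp_gpt40])
similarity = cosine_similarity(vectors[0], vectors[1])[0, 0]  # 1.0 = identical themes
print(f"thematic similarity: {similarity:.2f}")
```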

Citations: 0
Hyper-selective explainability: an empirical case study of the utility of explainability in a clinical decision support system
Pub Date : 2025-12-10 DOI: 10.1007/s43681-025-00837-y
Shaul A. Duke, Peter Sandøe, Thomas Bøker Lund, Elisabetta Maria Abenavoli, Thomas Beyer, Daria Ferrara, Armin Frille, Stefan Gruenert, Osama Sabri, Roberto Sciagrà, Miriam Pepponi, Hesse Swen, Anke Tönjes, Hubert Wirtz, Josef Yu, Lalith Kumar Shiyam Sundar, Sune Holm

Explainability is a leading solution offered to address the challenge of AI's black-box nature. However, much can go wrong when trying to apply explainability, and its success is far from certain. Moreover, there is insufficient empirical data regarding the effectiveness of concrete explainability efforts. We examined an explainability scenario for an AI decision support tool under development for the early detection of cancer-related cachexia, a potentially fatal metabolic syndrome. We conducted 13 interviews with clinicians who deal with cachexia, asked about their prior experience with AI tools and their views on explainability, and presented an explainability scenario based on the Shapley Additive Explanations (SHAP) method. Most clinicians we interviewed had limited prior experience with AI tools, and a majority of them believed that the explainability of such an AI system for the early detection of cachexia is essential. When presented with the SHAP explainability scheme, they had limited familiarity with the features that contributed to the tool's ruling, and only a minority of the clinicians (nuclear medicine experts) stated that they could utilize these features in a meaningful manner. Paradoxically, it is the clinicians who come into contact with patients who cannot make use of this specific SHAP explanation. This study highlights the challenges of offering a hyper-selective explainability tool in clinical settings. It also shows the challenge of developing explainable-by-design AI systems.
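To make the SHAP scenario concrete, a minimal sketch of per-feature attributions of the kind discussed in the interviews is shown below. The classifier, feature names, and labels are synthetic placeholders and are not the cachexia tool under development.

```python
# Minimal sketch, assumptions rather than the study's clinical model: producing
# per-feature SHAP attributions for individual predictions. All data and
# feature names are synthetic placeholders.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = pd.DataFrame({
    "weight_loss_pct": rng.normal(5, 3, 300),   # hypothetical clinical features
    "muscle_volume": rng.normal(30, 5, 300),
    "crp_level": rng.normal(10, 4, 300),
})
y = (X["weight_loss_pct"] > 6).astype(int)      # toy cachexia label

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:5])  # attributions for five patients
print(shap_values)                               # one contribution per feature per prediction
```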

Citations: 0
The anatomy of AI policies: a systematic comparative analysis of AI policies across the globe
Pub Date : 2025-12-10 DOI: 10.1007/s43681-025-00886-3
Amna Batool, Sunny Lee, Yue Liu, Liming Dong

The rapid expansion of artificial intelligence (AI) across sectors brings significant benefits but also substantial risks, such as bias, discrimination, and lack of transparency. Mitigating these risks requires AI governance frameworks that ensure ethical and responsible use. While existing studies highlight strategies and ethical guidelines, comparative analyses of emerging responsible AI (RAI) frameworks, standards, and regulations remain limited. This study aims to fill this gap by employing a rapid review methodology to examine 17 responsible AI frameworks, standards, and regulations, which we refer to as AI policies throughout this research, from diverse regions, including Singapore, the US, the UK, Canada, Hiroshima, and Australia, and from global organizations including the Organisation for Economic Co-operation and Development (OECD) and the International Organization for Standardization (ISO). The research addresses four primary questions: identifying global and local AI policies, identifying and analyzing their key features, assessing implementation challenges, and determining the essential components for designing an integrated AI governance framework. Eleven key features were identified, including RAI Principles, Stakeholders, Stages (AI software development life cycle), Targeted audiences, Scalability, Enforceability, Resource Intensive, Region, Technology, AI governance practices (prerequisites, outcomes, implementation tools or guides), and AI governance area. The comparative analysis highlighted that while the AI policies offer detailed implementation guidelines, they differ in their approaches, mandatory nature, scalability, and resource demands. These differences are critical for organizations seeking to implement these policies effectively. Challenges related to resource intensity, scalability, governance practices, and ambiguous targeted audiences were noted as significant barriers to successful adoption. Based on the analysis, key components for an RAI framework were proposed and categorized into qualities (scalable, extensible, adaptive, efficient), dimensions (scope, context, implementation practices), and governance practices (prerequisites/outcomes, resources, governance steps). These components aim to guide organizations in developing AI governance frameworks.

Citations: 0
Natural selection of minds: how AI elevates idea-centric research in academia
Pub Date : 2025-12-09 DOI: 10.1007/s43681-025-00922-2
Amir Hafezikhah

As artificial intelligence becomes increasingly embedded in academic life, it is beginning to reshape the cognitive, ethical, and institutional foundations of scholarly practice. This paper argues that large language models and automated research tools do more than enhance productivity, fundamentally altering how ideas are generated, valued, and legitimized. We conceptualize this transformation as a shift from the “natural selection of skills” to a “natural selection of minds,” where academic success increasingly depends on the ability to frame generative questions, synthesize across disciplines, and ethically navigate human–machine collaboration. Through interdisciplinary case studies and critical analysis, we highlight both the promises and the risks of this epistemic transition, emphasizing the need for reflection and potential reevaluation of academic standards to align with the realities of hybrid intelligence and cognitive inequality in the AI era. While our analysis captures dynamics most visible in fields shaped by publish-or-perish pressures and formalist conventions, it may not generalize equally across all disciplines.

Citations: 0
The meta-layered framework: a diagnostic approach to ethical pluralism in human–AI systems—paradox, pluralism, and the moral architecture of adaptive intelligence
Pub Date : 2025-12-09 DOI: 10.1007/s43681-025-00880-9
Giovanni Velotto

AI systems increasingly mediate high-stakes decisions and expose value conflicts that fixed principles or design-time value lists cannot settle. This article proposes the Meta-Layered Framework (MLF): a diagnostic, governance-first architecture that renders pluralism and persistent contradiction procedurally governable. The MLF distinguishes six non-substitutable layers—structural, epistemic, relational, political, ontological, and reflexive—and coordinates action through a simple grammar: type → route → review. The accompanying Ethical Tension Map operationalises this grammar by linking each live dispute to layer-proper evidence, a competent forum with standing and remedy (including pause/rollback), and a fixed/triggered review cadence. A comparative analysis clarifies how this approach diverges from Value-Sensitive Design, Explainable AI, and process ethics, which surface concerns and improve legibility but under-specify standing, authority, and cadence. An illustrative healthcare case (MedIntra) shows how the framework converts bedside reasoning, disparity control, and sacred-role boundaries into auditable routines. The method is intended for AI designers, governance bodies, and policy-makers, and presupposes minimal assurance conditions: separation of build and adjudication, named authorities with halt powers, and proportional documentation. Its value is procedural: decisions are made provisionally, monitored against thresholds, and altered or unwound when evidence demands.
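One way to read the type → route → review grammar is as a record schema for entries in the Ethical Tension Map. The dataclass below is purely illustrative: the layer names come from the abstract, while the field names and example values are invented.

```python
# Illustrative sketch, not the authors' implementation: one entry in an
# "Ethical Tension Map", expressing the type -> route -> review grammar.
# Layer names follow the abstract; every field name and value is invented.
from dataclasses import dataclass
from enum import Enum

class Layer(Enum):
    STRUCTURAL = "structural"
    EPISTEMIC = "epistemic"
    RELATIONAL = "relational"
    POLITICAL = "political"
    ONTOLOGICAL = "ontological"
    REFLEXIVE = "reflexive"

@dataclass
class TensionEntry:
    dispute: str                 # the live value conflict being tracked
    layer: Layer                 # "type": which non-substitutable layer it belongs to
    evidence: list[str]          # layer-proper evidence supporting the dispute
    forum: str                   # "route": competent body with standing and remedy
    remedies: list[str]          # e.g. ["pause", "rollback"]
    review_cadence_days: int     # "review": fixed or triggered re-examination interval

entry = TensionEntry(
    dispute="triage model under-serves rural patients",
    layer=Layer.POLITICAL,
    evidence=["disparity audit Q3", "clinician escalation log"],
    forum="clinical governance board",
    remedies=["pause", "rollback"],
    review_cadence_days=90,
)
print(entry.layer.value, "->", entry.forum, "-> review every", entry.review_cadence_days, "days")
```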

Citations: 0
Implementing AI ethics in practice and the question of ’who?’ A framework and a multiple case study
Pub Date : 2025-12-05 DOI: 10.1007/s43681-025-00870-x
Kai-Kristian Kemell, Patrik Floréen, Mikko Raatikainen, Jukka K. Nurminen

While interest in AI ethics continues to grow, companies still find AI ethics challenging to implement in practice. Some existing studies have sought to understand the current state of practice in AI ethics and the challenges companies face while trying to take ethics into account. However, there is still work to be done in this regard. Specifically, we consider the viewpoint of roles in the practical implementation to be underexplored. In other words, "who" should be doing AI ethics? We conduct an exploratory multiple case study to understand how three companies tackle AI ethics, especially in terms of roles. We conduct semi-structured interviews with respondents working in both management and developer roles. The resulting interview data is analyzed using thematic analysis. Our results showcase different ways in which AI ethics can be approached in terms of roles. Our case companies employ different approaches to AI ethics and also discuss likely future scenarios for their organizations, as well as how they think things should ideally be done. In addition, we present a framework that highlights four general ways to approach AI ethics in terms of roles and responsibilities. This study furthers our understanding of how AI ethics is approached in practice in terms of organizational roles and processes, serving as an initial exploration of the topic. The framework we propose provides a tangible way to start considering roles related to AI ethics in both research and practice.

Citations: 0