
AI and ethics: latest publications

The ethical imperative of algorithmic fairness in AI-enabled hiring: a critical analysis of bias, accountability, and justice
Pub Date: 2025-12-15 | DOI: 10.1007/s43681-025-00927-x
Jason Law

Artificial intelligence is increasingly used to screen job applicants, yet empirical studies show that algorithmic hiring systems can reproduce racial, gender, and intersectional disparities. Existing mitigation strategies frequently treat these disparities as technical problems—addressed through data adjustments, model tuning, or compliance metrics—while insufficiently engaging with the broader ethical implications of discriminatory outcomes. This paper examines algorithmic bias in hiring through established ethical concepts of justice, capability development, and recognition to explain why technical and regulatory approaches alone prove inadequate. The analysis synthesizes empirical evidence of algorithmic discrimination with normative frameworks to identify structural limitations in prevailing fairness interventions. Findings reveal that while technical methods can reduce measurable disparities, they often fail to address deeper concerns: restricted access to meaningful opportunities, the devaluation of marginalized groups, and violations of procedural fairness. Regulatory frameworks similarly prioritize compliance over substantive commitments, contributing to “bias washing” where organizations meet formal requirements without achieving genuine equity. To address these limitations, the paper proposes an “ethical by design” governance model that integrates normative principles with technical implementation, guiding organizations toward hiring systems that promote equitable opportunity distribution and uphold candidate dignity.
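The "compliance metrics" this abstract critiques are typically group-parity measures computed over a screener's outputs. As a minimal illustration (an assumption for this listing, not the paper's own method), the Python sketch below computes two common ones, the demographic-parity difference and the four-fifths (disparate-impact) ratio; all data and names are hypothetical.

```python
# Illustrative sketch (not the paper's method): two compliance-style
# fairness metrics often used to audit algorithmic hiring tools.

def selection_rate(selected, group, value):
    """Fraction of applicants in the given group that the screener selected."""
    members = [s for s, g in zip(selected, group) if g == value]
    return sum(members) / len(members) if members else 0.0

def demographic_parity_difference(selected, group, a, b):
    """Absolute gap in selection rates between groups a and b (0 = parity)."""
    return abs(selection_rate(selected, group, a) - selection_rate(selected, group, b))

def disparate_impact_ratio(selected, group, protected, reference):
    """Protected-group selection rate over reference-group rate; the US EEOC
    'four-fifths rule' flags ratios below 0.8."""
    ref = selection_rate(selected, group, reference)
    return selection_rate(selected, group, protected) / ref if ref else float("inf")

# Toy data: 1 = advanced to interview, 0 = rejected.
selected = [1, 0, 1, 1, 0, 0, 1, 0]
group = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(selected, group, "A", "B"))  # 0.5
print(disparate_impact_ratio(selected, group, "B", "A"))         # 0.33, flagged
```

Metrics like these make the paper's point concrete: a system can clear (or be tuned to clear) the 0.8 threshold while the deeper harms the abstract lists, restricted opportunity, misrecognition, and procedural unfairness, go unmeasured.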

Citations: 0
Artificial intelligence and synthetic biology: biosecurity risks, dual-use concerns, and governance pathways
Pub Date: 2025-12-15 | DOI: 10.1007/s43681-025-00872-9
Kirolos Eskandar

Artificial intelligence (AI) is accelerating discovery timelines in synthetic biology, expanding opportunities for therapeutic breakthroughs, sustainable bio-manufacturing, and rapid response to health and environmental challenges. At the same time, this acceleration shifts biosecurity risks from physical materials toward a broader socio-technical landscape involving models, datasets, and distributed automation. This literature review synthesizes evidence from 119 peer-reviewed articles published between January 2015 and August 2025, focusing on biosecurity risks, dual-use concerns, and governance responses at the AI–synthetic-biology interface. Findings indicate that AI systems consistently increase design throughput and lower expertise barriers, enabling faster medical and industrial innovation but also heightening risks of repurposing for harmful molecules or genetic sequences. Existing governance remains fragmented: biosafety regimes emphasize physical agents and laboratories, while AI governance frameworks focus on privacy and fairness—leaving critical blind spots for biological misuse scenarios. Mitigation measures identified in the literature converge on layered controls, including risk-tiered access to high-capability models, systematic red-teaming prior to release, strengthened DNA-synthesis screening (including short fragments), audit logging, secure data infrastructures, and international capacity-building. Evaluation gaps persist, and harmonized metrics—such as synthesis-screening coverage, red-team testing frequency, accredited biofoundries, and early-warning lead times—are recommended for systematic monitoring of governance effectiveness. Addressing these challenges is essential to ensure that AI-enabled synthetic biology advances responsibly, balancing its transformative potential for health and sustainability with global biosecurity.
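Among the layered controls listed above, "risk-tiered access to high-capability models" can be pictured as a simple policy gate. The sketch below is purely hypothetical; the tier names, capability levels, and allow rule are assumptions made for illustration, not a description of any deployed screening system.

```python
# Hypothetical sketch of risk-tiered access control for biodesign models.
# Tiers, capability levels, and the rule itself are illustrative assumptions.
from enum import IntEnum

class ModelCapability(IntEnum):
    GENERAL = 0         # general-purpose assistant
    PROTEIN_DESIGN = 1  # can propose novel protein sequences
    AGENT_LAB = 2       # can drive automated lab equipment

class UserTier(IntEnum):
    PUBLIC = 0      # unverified user
    VERIFIED = 1    # identity-verified researcher
    ACCREDITED = 2  # accredited institution with audit logging in place

def allow_access(user: UserTier, model: ModelCapability) -> bool:
    """Allow use only when the user's vetting tier meets or exceeds the
    model's capability tier; every decision is written to an audit log."""
    decision = user >= model
    print(f"audit: user_tier={user.name} model={model.name} allowed={decision}")
    return decision

allow_access(UserTier.PUBLIC, ModelCapability.PROTEIN_DESIGN)  # False
allow_access(UserTier.ACCREDITED, ModelCapability.AGENT_LAB)   # True
```

The audit line doubles as the logging control the review identifies, and the tier mapping is the kind of quantity the proposed harmonized metrics (e.g., screening coverage) would be computed over.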

Citations: 0
Frankenstein 2.0: now IT works. How to build an intelligent machine with human dignity
Pub Date: 2025-12-15 | DOI: 10.1007/s43681-025-00894-3
Christian Thielscher

Introduction

The rapid advancement of artificial intelligence raises profound technical, ethical and philosophical questions: Can machines attain human-like dignity, and if so, how do we decide that? This article presents “Immanuel”, a blueprint for an intelligent machine designed to meet the criteria of human dignity as defined in international human rights frameworks.

Methods

Immanuel was developed by designing technical representations of human traits as described in standard textbooks on physiology, psychiatry, and psychology. To establish that Immanuel does indeed have human dignity, each component of dignity (as described earlier) was checked individually.

Results

Immanuel’s design integrates perception, self-modeling (an “I” module), and ethical valuation. All features are implementable with current technology and enable human-like moral feelings, learning, empathy, and autonomous decision-making. Systematic comparison confirms that Immanuel fulfills all criteria for human dignity.

Discussion

As of today, it is possible to build machines with human dignity. The discussion of the ethical implications is therefore urgent.

Citations: 0
Redefining student assessment in AI-infused learning environments: a systematic review of challenges and strategies for academic integrity
Pub Date: 2025-12-15 | DOI: 10.1007/s43681-025-00871-w
Prince D N Ncube, Godwin P Dzvapatsva, Courage Matobobo, Memory M Ranga

Integrating Artificial Intelligence (AI) tools, particularly generative AI (GenAI), in higher education is reshaping assessment practices, presenting both challenges and opportunities. While these tools enhance learning, they also raise concerns about academic integrity and the authenticity of student work. Traditional assessments, such as essays and take-home assignments, are increasingly susceptible to AI-assisted plagiarism, necessitating a re-evaluation of assessment strategies. This systematic review, guided by the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) framework, examines educators' challenges in assessing student learning in AI-infused environments. Using Scopus, IEEE Xplore, and ScienceDirect, we identified relevant literature highlighting concerns about originality, critical thinking evaluation, and the quality of student work. Findings underscore the need for AI-resistant, process-based assessments, such as oral exams and multi-stage evaluations, to uphold academic integrity. The study advocates for institutional AI policies and digital literacy programs to promote ethical AI use and mitigate academic misconduct. Additionally, it emphasises a balanced human-AI collaboration in assessments, ensuring that AI enhances rather than replaces student effort. Addressing these challenges can reduce academic misconduct cases, allowing educators to focus on fostering meaningful learning experiences and sustainable educational outcomes.

Citations: 0
Conversational AI in customer support: Revolution or risk?
Pub Date: 2025-12-15 | DOI: 10.1007/s43681-025-00893-4
Bilawal Sarwar, Joudat Irshad Hashmi

This study explores the transformative impact of conversational artificial intelligence (AI) on customer support systems across various industries. Through a systematic qualitative literature review of 15 empirical studies and the analysis of 10 contemporary abstracts, the research identifies key operational, experiential, and ethical dimensions associated with AI-driven service delivery. Findings reveal that conversational AI significantly enhances efficiency, scalability, and customer accessibility, especially during crisis periods like the COVID-19 pandemic. However, challenges such as limited emotional intelligence, ethical concerns over data privacy and bias, and lack of transparency persist. The review highlights that customer satisfaction and trust depend heavily on the system's communication quality, anthropomorphic design, and ability to escalate complex issues to human agents. Hybrid models, combining AI automation with human oversight, consistently produce the most favorable outcomes. Sector-specific variations underscore the importance of context-aware AI deployment strategies. The study emphasizes the urgent need for ethical frameworks, regulatory standards, and user-centered design in the development and implementation of conversational AI technologies. Future research should focus on longitudinal, real-world evaluations to address gaps in user experience, fairness, and technological accountability. This paper contributes to the understanding of how AI can be sustainably integrated into customer service, aligning innovation with human-centric values.

Citations: 0
Ethical principles for artificial intelligence in education: a meta-review approach
Pub Date: 2025-12-15 | DOI: 10.1007/s43681-025-00878-3
Mihiri Wickramasinghe, Lasith Gunawardena, Amitha Padukkage

The integration of Artificial Intelligence (AI) in education has caused increasing concerns over principles of ethics, including transparency, fairness, privacy, and accountability. Although several literature reviews have investigated these issues separately, there is a lack of consistency among research, leading to fragmented terminology and conceptual overlap. The present study aims to perform a meta-review of current literature studies to identify, categorize, and consolidate the ethical considerations associated with AI usage in education. This study systematically analyzed 13 peer-reviewed literature review articles published from 2021 to 2025, adhering to the PRISMA 2020 framework. Articles were chosen according to rigorous inclusion criteria, highlighting reviews that examined ethical issues in AI applications within various educational contexts. A qualitative thematic synthesis was employed to extract and categorize ethical concepts into broad themes. The research revealed seven key ethical themes: Transparency and Accountability, Data Protection and Security, Fairness and Non-Discrimination, Human-Centric AI design, Ethical Responsibility and Governance, Moral Principles in AI, and Technical Integrity and Robustness. These findings provide a unified thematic framework that encompasses both fundamental ethics and contemporary challenges, including those presented by generative AI. The study identifies gaps in the literature by providing a systematic ethical framework and suggests future research directions that include empirical validation, cross-cultural investigations, and an examination of generative AI risks. This review theoretically aids in establishing a cohesive ethical foundation to guide future models and governance frameworks for responsible AI in education.

Citations: 0
Ten reasons why—the case for more and better AI regulation
Pub Date: 2025-12-15 | DOI: 10.1007/s43681-025-00865-8
Manuel Woersdoerfer

In recent months, there has been a noticeable shift toward AI deregulation and soft law, particularly in the U.S.—exemplified by Trump’s revocation of Biden’s executive order—and, to a lesser extent, in the E.U., as seen in the adoption of a voluntary code of conduct for providers of general-purpose AI. In contrast to these political trends, this paper argues that the significant risks posed by AI technologies call for stronger and more effective governance mechanisms. It presents ten key reasons why robust AI regulation is essential: the impact of AI on labor markets and social equity (i.e., the global/digital divide); (agentic) AI biases and discrimination; infringements on privacy; the amplification of harmful online content, including dangerous and hate speech and disinformation; business and human rights concerns; environmental consequences; the growing politico-economic power of big tech companies; threats to democracy and the rule of law; and the military implications of emerging AI technologies. Finally, the paper explores potential pathways for future AI regulation (soft vs. hard law and constitutional AI vs. AI guardrails) and proposes strategies to enhance existing regulatory frameworks, such as the E.U.’s AI Act and competition policy.

Citations: 0
Superintelligent AI and meaning in life
Pub Date: 2025-12-14 | DOI: 10.1007/s43681-025-00861-y
Adriana Placani

This paper shows that superintelligent AI (ASI) poses a significant risk to meaning in human life by relying on Susan Wolf’s conception, according to which meaningful lives are lives of active engagement in projects of worth. The paper argues that ASI makes it less likely for humans to lead meaningful lives by reducing the possibility of human contribution and active engagement in what are deemed to be some of the most worthwhile projects of human life. The paper also criticizes Nick Bostrom’s and John Danaher’s views on how meaning may be retained in spite of this.

Citations: 0
A capability-sensitive framework for assessing ethical assistance and harm in AI systems
Pub Date: 2025-12-12 | DOI: 10.1007/s43681-025-00931-1
Sankarshan Saptasomabuddha

This paper introduces the Capability-Sensitive Framework (CSF), a formal method for auditing AI systems through the lens of capability approach. CSF specifies two normative guardrails: a capability floor, which ensures no individual is pushed below thresholds for essential freedoms, and a life-plan ceiling, which guarantees that people retain viable paths toward their meaningful goals. These constraints are operationalized via two metrics, the Capability-Coverage Ratio (CCR) and Life-Plan Alignment Score (LAS), evaluated at both individual and subgroup levels. A typology of positive and negative modalities situates benefits and harms in capability terms, distinguishing ethical outcomes from systemic risks and clarifying category boundaries. CSF foregrounds human flourishing, ethical sufficiency and human-centric values, enabling actionable, model-agnostic guidance for high-stakes sociotechnical decisions. This work bridges normative political philosophy with formal auditing practices for AI systems, offering a rigorous foundation for capability-aware AI governance.
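The abstract names the two metrics but does not give their formulas, so the sketch below is one plausible operationalization under stated assumptions: CCR as the share of essential capabilities that remain at or above their floor thresholds, and LAS as the share of a person's life-plan goals that remain viable after a decision. These definitions and the 0.5 cutoff are assumptions for illustration, not the paper's own.

```python
# One plausible (assumed) operationalization of the CSF metrics; the paper's
# own formulas are not given in the abstract.

def capability_coverage_ratio(capabilities: dict[str, float],
                              floors: dict[str, float]) -> float:
    """CCR (assumed form): share of essential capabilities at or above
    their floor threshold after the AI system's decision."""
    met = sum(1 for name, level in capabilities.items() if level >= floors[name])
    return met / len(floors)

def life_plan_alignment_score(goal_viability: list[float],
                              cutoff: float = 0.5) -> float:
    """LAS (assumed form): share of life-plan goals whose post-decision
    viability stays above a minimum; the 0.5 cutoff is an assumption."""
    return sum(1 for v in goal_viability if v >= cutoff) / len(goal_viability)

# Toy audit of one individual affected by an automated decision.
caps   = {"health": 0.9, "education": 0.4, "mobility": 0.7}
floors = {"health": 0.6, "education": 0.5, "mobility": 0.5}
print(capability_coverage_ratio(caps, floors))     # 0.667, a floor is violated
print(life_plan_alignment_score([0.8, 0.3, 0.6]))  # 0.667
```

Subgroup-level evaluation, which the abstract also calls for, would aggregate such per-person scores across groups and flag any group whose scores breach the guardrails.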

Citations: 0
Constitutive knowledge sources: an institutional approach to epistemic trust in opaque AI systems
Pub Date: 2025-12-11 | DOI: 10.1007/s43681-025-00930-2
Lior Gazit

The opacity of AI systems poses a fundamental epistemic challenge: how can we justifiably trust systems whose decision processes are inscrutable? Rather than relying on inherently limited explainability, I propose an institutional epistemology of warrant. Drawing on speech act theory and social epistemology, I argue that institutional sources that constitute (rather than merely describe) reality can ground epistemic warrant without algorithmic transparency. The framework addresses both retrieval scenarios (existing constitutive sources are cited and verified) and generative scenarios (AI outputs acquire force only after institutional validation). It shifts attention from internal mechanisms to verifiable linkages with authoritative institutional frameworks. Despite important limitations (risks of fabricated constitutive sources, vulnerability to institutional decline, and ethical concerns about algorithmically originated normative content) the account offers a philosophically grounded alternative where comprehensibility is unattainable. The framework is intended for domains where constitutive authority enjoys recognized legitimacy, presupposing rather than creating institutional trust. The framework presupposes existing institutional trust; yet by making institutional warrant more visible and auditable, it also has the potential to reinforce that trust ex post.

Citations: 0