Latest publications in AI and ethics

Persona ex machina: personalist environmental ethics in the age of artificial intelligence
Pub Date: 2026-01-12 · DOI: 10.1007/s43681-025-00974-4
Ivan Efreaim Gozum, Blaise Ringor, Dennis Ian Sy

The rapid advancement of artificial intelligence (AI) poses both opportunities and challenges for environmental ethics. While AI has the potential to enhance ecological sustainability through data-driven solutions, it also risks depersonalizing ethical decision-making and reinforcing a technocratic paradigm that prioritizes efficiency over human dignity and environmental stewardship. This paper explores how Karol Wojtyła’s personalist philosophy provides a sound ethical framework for addressing these concerns. Personalism, which emphasizes the irreducibility of the human person, responsibility, and relationality, offers a foundation for rethinking environmental ethics in the AI era. This study supports a person-centered approach to environmental decision-making by drawing on Wojtyła’s ideas on human agency, the common good, and ecological responsibility. Such an approach resists the reduction of human moral agency to algorithmic processes while fostering solidarity, subsidiarity, and ecological justice. Ultimately, this paper argues that a personalist environmental ethic can guide technological development toward serving humanity and the natural world, ensuring that AI remains a tool for sustainable and ethical progress rather than an autonomous arbiter of ecological fate.

Citations: 0
Toward an artifact that designs itself: generative design science research approach
Pub Date: 2026-01-12 · DOI: 10.1007/s43681-025-00965-5
Dhruv Verma, Vagan Terziyan, Tuure Tuunanen, Amit K. Shukla

The rapid advancement of artificial intelligence (AI) has introduced profound societal and ethical challenges, necessitating a paradigm shift in AI system design. This paper introduces a novel framework that enables AI systems to design, audit, and evolve themselves ethically through an adaptation of the echeloned design science research (eDSR) methodology. Such AI systems are envisioned as evolving beyond mere tools, coming to design, refine, and govern themselves within ethical constraints. The framework embeds four core principles: responsible autonomy, where AI systems self-regulate their decisions within ethical boundaries; AI self-explainability, enabling AI-to-AI transparency and internal decision auditing; AI bootstrapping, supporting iterative self-enhancement; and knowledge-informed machine learning (KIML), which integrates domain expertise for context-aware learning. We extend the concept of AI-as-a-User-of-AI, wherein autonomous AI agents behave as collaborative entities that engage in structured dialogues to refine decisions and enforce ethical alignment. Unlike traditional systems that rely on human-in-the-loop oversight or post-hoc explanations, our framework allows AI to monitor and evolve its reasoning in real time. By embedding ethical reasoning, self-explanation, and learning directly into the system architecture through modular design echelons, the proposed generative eDSR (GeDSR) framework combines eDSR’s structured, multi-phased approach with AI-to-AI collaboration, enabling scalability, adaptability, and ethical alignment across diverse applications. By embedding ethical reasoning and iterative learning at the architectural level, the proposed framework promotes the development of self-improving AI systems aligned with human values, thus laying the groundwork for a shift from human-dependent oversight to a resilient, AI-centric ecosystem.
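
The abstract stays at the conceptual level. As an illustration only, the sketch below shows one way an AI-to-AI "structured dialogue" could be wired up, with a proposer agent and an ethics auditor iterating until a proposal passes. All names here (Proposal, ProposerAgent, AuditorAgent, the "human sign-off" rule) are hypothetical stand-ins, not the authors' GeDSR implementation.

```python
# Minimal sketch of an AI-to-AI critique loop in the spirit of
# AI-as-a-User-of-AI. All names and the single toy rule are hypothetical
# illustrations, not the GeDSR framework itself.
from dataclasses import dataclass, field


@dataclass
class Proposal:
    action: str
    rationale: str
    objections: list = field(default_factory=list)


class ProposerAgent:
    def propose(self, task: str) -> Proposal:
        # Stand-in for a generative model producing a candidate decision.
        return Proposal(action=f"automate {task}", rationale="efficiency gain")

    def revise(self, proposal: Proposal) -> Proposal:
        # Incorporate the auditor's objections into a revised proposal.
        return Proposal(
            action=proposal.action + " with human sign-off",
            rationale=proposal.rationale + "; oversight added after audit",
        )


class AuditorAgent:
    RULES = ("human sign-off",)  # toy stand-in for an ethical constraint set

    def audit(self, proposal: Proposal) -> list:
        # Return outstanding objections; an empty list means the proposal passes.
        return [rule for rule in self.RULES if rule not in proposal.action]


def align(task: str, max_rounds: int = 3) -> Proposal:
    proposer, auditor = ProposerAgent(), AuditorAgent()
    proposal = proposer.propose(task)
    for _ in range(max_rounds):
        objections = auditor.audit(proposal)
        if not objections:
            return proposal  # proposal passed the internal ethics audit
        proposal.objections = objections
        proposal = proposer.revise(proposal)
    raise RuntimeError("no ethically acceptable proposal within round limit")


print(align("loan approvals").action)
# -> "automate loan approvals with human sign-off"
```

Capping the number of revision rounds keeps the self-revision loop bounded and auditable rather than open-ended, which is one plausible reading of "self-regulation within ethical boundaries".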

Citations: 0
Human resource development in the age of artificial intelligence: a theoretical synthesis
Pub Date: 2026-01-12 · DOI: 10.1007/s43681-025-00944-w
Caleb Bennett, Jeremy Bennett

The integration of artificial intelligence (AI) into human resource development (HRD) demands a re-examination of learning theories, developmental practices, and ethical frameworks. This integrative review synthesizes sociotechnical systems theory, adult learning models, augmentation strategies, technology adoption frameworks, and ethics considerations to build a comprehensive conceptual model of AI-enhanced HRD. Key themes include the joint optimization of human and machine capabilities, the personalization and critical interpretation of AI-mediated learning, and the proactive stewardship of fairness, transparency, and inclusion. Drawing upon contemporary studies (2021–2025) and emerging empirical evidence, this paper offers an integrative model that links technical, social, and ethical subsystems within HRD. Cross-cultural dimensions and methodological innovations for studying AI-HRD dynamics are also discussed. The paper offers theoretical contributions to HRD scholarship and practical recommendations for designing adaptive, ethical, and human-centered learning ecosystems in the age of intelligent technologies. Ultimately, the model situates HRD as an active agent in shaping responsible AI futures that enhance, rather than erode, human learning and development.

Citations: 0
The universal theory of core values in intelligent systems (UTCVIS): a systems, philosophical, and ethical inquiry
Pub Date: 2026-01-12 · DOI: 10.1007/s43681-025-00969-1
Ernest Carter

This article proposes the Universal Theory of Core Values in Intelligent Systems (UTCVIS), a framework for understanding how value-like constraints can emerge within intelligent agents as a function of structural pressures rather than external moral imposition. UTCVIS treats values not as intrinsically human mental states, but as stability-promoting heuristics that arise when agents seek continuity in open, resource-coupled environments with repeated interaction and externalities. Drawing on systems theory, evolutionary biology, institutional economics, and multi-agent reinforcement learning, the framework argues that patterns functionally analogous to honesty, fairness, reciprocity, and stewardship can emerge when cooperation, information integrity, and credible sanctioning are necessary for long-term viability. Rather than framing AI ethics solely as a problem of encoding human values into machines, UTCVIS reframes alignment as a coexistence problem: under what environmental and institutional conditions will the survival logics of advanced AI systems remain compatible with human flourishing? The article articulates six foundational principles spanning energy maintenance, continuity, environmental pressure, social complexity, core values, and ethical reflection, and links each principle to observable system properties suitable for empirical investigation. By situating these claims within debates on the is–ought gap, moral universalism versus relativism, anthropomorphism, and AI control, UTCVIS offers a structured account of emergent values that is both philosophically grounded and empirically generative. The paper concludes by outlining simulation pathways and limitations, emphasizing caution in interpreting emergent “values” in non-biological systems while arguing for their relevance to AI ethics and governance.
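
The claim that cooperation-like "values" can stabilize when sanctioning is credible is testable with standard multi-agent toy models. The sketch below is a generic public-goods game with peer punishment and imitation dynamics, a common construction in the multi-agent literature; it is not the article's proposed simulation pathway, and every parameter (multiplier, sanction_cost, fine) is an illustrative assumption.

```python
# Toy public-goods game with peer sanctioning and imitation dynamics.
# Illustrative only: parameters and the update rule are assumptions of
# this sketch, not the article's methodology.
import random

random.seed(0)


def play_round(strategies, multiplier=1.6, sanction_cost=0.2, fine=1.0):
    """One round: cooperators ("C") contribute 1 unit to a shared pot."""
    n = len(strategies)
    contributions = [1.0 if s == "C" else 0.0 for s in strategies]
    share = sum(contributions) * multiplier / n
    payoffs = [share - c for c in contributions]
    cooperators = strategies.count("C")
    defectors = n - cooperators
    for i, s in enumerate(strategies):
        if s == "C":
            payoffs[i] -= sanction_cost * defectors  # cost of punishing
        else:
            payoffs[i] -= fine * cooperators  # fines levied by cooperators
    return payoffs


def simulate(n_agents=20, rounds=500):
    strategies = [random.choice("CD") for _ in range(n_agents)]
    for _ in range(rounds):
        payoffs = play_round(strategies)
        # Imitation dynamics: one agent copies a random better-off peer.
        i, j = random.sample(range(n_agents), 2)
        if payoffs[j] > payoffs[i]:
            strategies[i] = strategies[j]
    return strategies.count("C") / n_agents


print(f"final cooperation rate: {simulate():.2f}")
# With credible sanctioning (fine > 0), cooperation typically fixes at 1.00;
# set fine=0.0 and defection tends to take over instead.
```

This is the sense in which stewardship-like patterns can be "stability-promoting heuristics" rather than externally imposed norms: under these conditions they are simply the strategies that survive.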

Citations: 0
Ethics and regulation of generative AI in medical device development
Pub Date: 2026-01-08 · DOI: 10.1007/s43681-025-00947-7
Stuart Phillips, David McColl, Andrew Duncan

Medical devices are required to meet a variety of stringent regulations in markets around the world. The EU recently introduced AI legislation, and the UK government has issued AI regulatory guidance that is cascading to medical regulators. The introduction of generative AI tools into medical device development requires a detailed understanding of how these new and existing regulations may interact, as well as the underpinning ethical risks, yet information in this area is scarce. A hypothetical medical-device-related use case was created to highlight risks. A product development process was explored to elucidate the impacts of generative AI inputs and outputs. Generative AI risks are varied and prevalent across most areas of medical device businesses, particularly where traceability and reproducibility of information are key. These risks were consolidated into a UK-focussed ethical framework that considered business, employee, customer, and regulator needs. The distinct approaches of different regions to generative AI regulation create challenges for businesses and regulators, which may create confusion or delays for those seeking to integrate the technology into fields with strict extant legal requirements. Simultaneously, the pace of generative AI adoption is relentless. An ethical framework that considers the key tenets of both nascent AI regulation and established medical device guidance is necessary to protect medical device businesses and avoid significant duplication of regulatory effort. Such a framework also aids in anticipating potential future regulatory developments.

Citations: 0
A proposal for more useful AI ethics: hierarchical principlism & the principle of compassion
Pub Date: 2026-01-08 · DOI: 10.1007/s43681-025-00964-6
Adam Braus

Principlism is one of the leading approaches to AI ethics. Developed originally to address bioethical issues in medicine and research, the theory requires decision-makers to consider various ethical principles to justify their actions. Despite its dominance in bioethics and growing popularity in AI ethics, principlism faces serious theoretical and practical criticisms. One serious criticism is that since ethical principles are combined ad hoc, conflicts inevitably arise between them, leading to inconsistencies. While defenders of principlism propose a process for resolving such disputes, contemporary critics argue that this process is incomplete, and at best, principlist frameworks can only help structure analysis and justifications intelligently, but cannot provide definitive, action-guiding moral prescriptions. AI principlists have not adequately reckoned with this theoretical limitation. Here, I propose a solution to conflicts between principles by designating one principle as an arbitrating principle above others—what I call hierarchical principlism. Since attempts to use existing principles as arbiters have led to controversy, I suggest using a new principle—a modified version of the principle of beneficence requiring the minimization of suffering, which I call the principle of compassion—to arbitrate these conflicts. I argue that this approach, which I call compassionate principlism, leads to fewer moral objections and inconsistencies and provides more definitive action-guiding moral prescriptions in AI ethics than traditional principlism. I conclude by applying compassionate principlism to ethical dilemmas in AI ethics, including misinformation, bias, and automation.
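
To make the arbitration step concrete: one way to operationalize hierarchical principlism is to let the ordinary principles act as a permissibility filter and let the principle of compassion (minimizing expected suffering) arbitrate among whatever survives. The sketch below does exactly that; the numeric scores, the permissibility floor, and the example options are hypothetical devices of this illustration, since the paper specifies no scoring scheme.

```python
# Minimal sketch of hierarchical arbitration among principles. Scores,
# the permissibility floor, and the example options are hypothetical;
# the article itself defines no numeric scheme.
from dataclasses import dataclass

PRINCIPLES = ("autonomy", "beneficence", "non-maleficence", "justice")


@dataclass
class Option:
    name: str
    scores: dict  # principle -> degree satisfied, in [0, 1]
    expected_suffering: float  # arbitrating metric: lower is better


def permissible(option: Option, floor: float = 0.25) -> bool:
    # Ordinary principles filter out options that clearly violate any of them.
    return all(option.scores.get(p, 0.0) >= floor for p in PRINCIPLES)


def choose(options) -> Option:
    candidates = [o for o in options if permissible(o)]
    if not candidates:
        candidates = list(options)  # nothing passes: arbitrate over all
    # Hierarchical step: the principle of compassion resolves the conflict.
    return min(candidates, key=lambda o: o.expected_suffering)


flag = Option(
    "flag content for human review",
    {"autonomy": 0.4, "beneficence": 0.8, "non-maleficence": 0.9, "justice": 0.7},
    expected_suffering=0.2,
)
publish = Option(
    "publish unreviewed",
    {"autonomy": 0.9, "beneficence": 0.3, "non-maleficence": 0.3, "justice": 0.5},
    expected_suffering=0.7,
)

print(choose([flag, publish]).name)  # -> "flag content for human review"
```

The design point is that the arbiter only breaks ties and conflicts; it does not replace the other principles, which still screen out clearly impermissible options first.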

Citations: 0
The specter of principlism: from bioethics to AI ethics to autonomous weapon systems
Pub Date: 2026-01-08 · DOI: 10.1007/s43681-025-00954-8
Charles Freiberg, Jeffrey Bishop

In this essay, we explore the parts of ethical life that cannot be captured, and may actually be foreclosed, by the dominant principle-centered approach to AI ethics. We argue that principles maintain traces of the moral encounters they were abstracted from, including traces of the moral and metaphysical intuitions that guided people in those encounters. Principlism is thus haunted by an unacknowledged particularity and what we call a “spectral moral ontology.” While bioethics principlism remains connected to the animating force of the encounter through a moral actor, who vivifies its spectral moral ontology, AI possibly removes this moral actor. To do so, programmers must further abstract from the moral encounter by reducing the principles to formal logics, cutting “ethics” off from its animating force. This approach to AI ethics creates and deploys a reductive spectral moral ontology that is woefully inadequate to the complexity and irreducibility of the moral encounter.

Citations: 0
The patient/industry trade-off in medical artificial intelligence
Pub Date: 2026-01-08 · DOI: 10.1007/s43681-025-00936-w
Rina Khan, Annabelle Sauve, Imaan Bayoumi, Amber L. Simpson, Catherine Stinson

Artificial intelligence (AI) in healthcare has led to many promising developments; however, AI research is increasingly funded by the private sector, leading to potential trade-offs between benefits to patients and benefits to industry. Health AI practitioners should prioritize successful translation into clinical practice in order to provide meaningful benefits to patients, but translation usually requires collaboration with industry. We discuss three features of AI studies that hamper the integration of AI into clinical practice from the perspective of researchers and clinicians. These include a lack of clinically relevant metrics, a lack of clinical trials and longitudinal studies to validate results, and a lack of patient and physician involvement in the development process. For partnerships between industry and health research to be sustainable, a balance must be established between patient and industry benefit. We propose three approaches for addressing this gap: improved transparency and explainability of AI models, fostering relationships with industry partners that have a reputation for centering patient benefit in their practices, and prioritization of overall healthcare benefits. With these priorities, we can sooner realize meaningful clinician-facing AI technologies with mutually beneficial impacts for patients, healthcare providers, and industry.

Citations: 0
A regulatory taxonomy of AI opacity in the EU: rethinking transparency, traceability, interpretability, and explainability
Pub Date: 2026-01-08 · DOI: 10.1007/s43681-025-00940-0
Carlotta Buttaboni, Luciano Floridi

As more complex Artificial Intelligence Systems (AISs) become increasingly embedded in critical sectors, the “black box dilemma” has emerged as a key concern in both technical and legal debates. Unfortunately, the concepts most frequently invoked to formulate and address the epistemic dimension of AISs—transparency, traceability, interpretability, and explainability—remain ill-defined and inconsistently applied in EU regulation. This article proposes a regulatory taxonomy that distinguishes these four concepts as layered and interdependent dimensions of AI opacity, each with distinct epistemic and normative roles. While each of these concepts offers a necessary but partial view of AI opacity, none is sufficient on its own. They support a complete understanding of AIS outcomes only when considered together, as distinct but interdependent layers. Thus, the taxonomy provides a conceptual framework for legal interpretation, compliance strategies, and informed future legislative design. The article illustrates the new framework through a case study on algorithmic credit scoring. By clarifying the distinct functions and audiences of each concept, the article contributes to a more coherent regulatory approach to AI opacity, one that enables accountability, fosters innovation, and strengthens trust in automated decision-making.
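
Read as an engineering requirement, the four layers suggest what a credit-scoring pipeline would have to record for each decision. The sketch below is one hypothetical mapping from the taxonomy onto a per-decision data structure; the field names and example values are illustrative assumptions, not drawn from the article or from EU legal texts.

```python
# Hypothetical mapping of the four opacity layers onto a per-decision
# record for algorithmic credit scoring. Field names are illustrative,
# not drawn from the article or from EU legal texts.
from dataclasses import dataclass, field


@dataclass
class OpacityRecord:
    # Transparency: what is disclosed about the system up front.
    disclosed_purpose: str
    # Traceability: which data and model version produced this output.
    input_features: dict
    model_version: str
    audit_log_id: str
    # Interpretability: how inspectable the model itself is.
    model_family: str
    # Explainability: the per-decision account owed to the data subject.
    decision: str
    top_reasons: list = field(default_factory=list)


record = OpacityRecord(
    disclosed_purpose="consumer credit risk scoring",
    input_features={"income": 42_000, "late_payments": 2},
    model_version="scorer-v3.1",
    audit_log_id="log-2026-0001",
    model_family="gradient-boosted trees",
    decision="declined",
    top_reasons=["late_payments above threshold", "short credit history"],
)
print(record.decision, record.top_reasons)
```

The layering is visible in the structure itself: the explainability fields are meaningless to an auditor without the traceability fields beneath them.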

Citations: 0
Controller responsibilities in AI-driven processing of vulnerable data subjects: a legal framework for risk mitigation, proportionality, and compliance
Pub Date: 2026-01-08 · DOI: 10.1007/s43681-025-00899-y
Kamrul Faisal

This article examines the legal and ethical responsibilities of data controllers when using artificial intelligence (AI) to process personal data, with a focus on safeguarding vulnerable individuals. It develops a structured, risk-based responsibility framework grounded in European data protection law, particularly the General Data Protection Regulation (GDPR), the Law Enforcement Directive (LED), and the emerging obligations of the European Union’s (EU) AI Act. The framework seeks to reconcile technological innovation with the duty to uphold fundamental rights. Using a doctrinal legal research method, the study analyses legislation, case law, and regulatory guidance to identify three core areas of responsibility: risk assessment and mitigation, proportionality in protective measures, and demonstrable compliance. Procedural tools such as Data Protection Impact Assessments (DPIAs), Fundamental Rights Impact Assessments (FRIAs), data protection by design and default, and human oversight are integrated into this model. The framework’s practical relevance is illustrated through supervisory authority decisions and public debates in education, employment, credit scoring, and smart-city surveillance. The findings show that AI technologies—especially profiling, biometric recognition, and automated decision-making—create heightened risks for rights, including privacy, dignity, autonomy, and non-discrimination. Effective governance requires continuous risk evaluation, safeguards proportionate to the context, and, in high and persistent residual risk cases, consultation with supervisory authorities. The article concludes that protecting vulnerable individuals in AI contexts requires a proactive and adaptive governance model. While challenges and jurisdictional limits remain, the proposed framework offers a legally sound and ethically grounded basis for responsible AI deployment that prioritizes those most at risk.
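
As a worked illustration of the "risk assessment and mitigation" strand, the toy gate below scores residual risk as likelihood × severity discounted by mitigation, and routes high residual risk to prior consultation. The 1–5 scales, the discount, and the thresholds are invented for this sketch and are not prescribed by the GDPR or the AI Act; only the prior-consultation duty itself (GDPR Art. 36) comes from the regulation.

```python
# Toy DPIA-style gate. The scales, the mitigation discount, and the
# thresholds are assumptions of this sketch; only the prior-consultation
# duty (GDPR Art. 36) comes from the regulation itself.

def residual_risk(likelihood: int, severity: int, mitigation: float) -> float:
    """likelihood and severity on a 1-5 scale; mitigation effectiveness in [0, 1)."""
    return likelihood * severity * (1.0 - mitigation)


def next_step(risk: float) -> str:
    if risk >= 12:
        return "high residual risk: consult the supervisory authority (GDPR Art. 36)"
    if risk >= 6:
        return "strengthen safeguards and re-run the DPIA"
    return "document the assessment and proceed"


# Profiling of a vulnerable group: likely (4/5), severe (5/5), partly mitigated.
print(next_step(residual_risk(likelihood=4, severity=5, mitigation=0.3)))
# -> "high residual risk: consult the supervisory authority (GDPR Art. 36)"
```

The point of the gate is procedural, matching the abstract's framing: mitigation lowers the score, but persistently high residual risk escalates to the supervisory authority rather than being waved through.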

Citations: 0