
AI and ethics: latest publications

Toward substantive intersectional algorithmic fairness: desiderata for a feminist approach
Pub Date : 2026-01-08 DOI: 10.1007/s43681-025-00926-y
Marie Mirsch, Laila Wegner, Jonas Strube, Carmen Leicht-Scholten

People’s experiences of discrimination are often shaped by multiple intersecting factors, yet algorithmic fairness research rarely reflects this complexity. While intersectionality offers tools for understanding how forms of oppression interact, current approaches to intersectional algorithmic fairness tend to focus on narrowly defined demographic subgroups. These methods contribute important insights but risk oversimplifying social reality and neglecting structural inequalities. In this paper, we outline how a substantive approach to intersectional algorithmic fairness can reorient this research and practice. In particular, we propose Substantive Intersectional Algorithmic Fairness, extending Ben Green’s (Philos Technol, 2022, https://doi.org/10.1007/s13347-022-00584-6) notion of substantive algorithmic fairness with insights from intersectional feminist theory. Aiming to provide guidance that is as actionable as possible, our approach is articulated as ten desiderata to guide the design, assessment, and deployment of algorithmic systems that address systemic inequities while mitigating harms to intersectionally marginalized communities. Rather than prescribing fixed operationalizations, these desiderata invite AI practitioners and experts to reflect on assumptions of neutrality, the use of protected attributes, the inclusion of multiply marginalized groups, and the transformative potential of algorithmic systems. By bridging computational and social science perspectives, the approach emphasizes that fairness cannot be separated from social context, and that in some cases, principled non-deployment may be necessary.
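The subgroup-centered approaches that the abstract contrasts with a substantive account are usually operationalized as fairness audits over intersections of protected attributes. The Python sketch below illustrates that baseline only, not the paper's proposed desiderata; the data, column names, and disparity measure are invented for illustration.

```python
# Minimal sketch of an intersectional subgroup fairness audit.
# Hypothetical data and column names; this shows the subgroup-metric baseline
# the abstract refers to, not the paper's substantive approach.
import pandas as pd

df = pd.DataFrame({
    "gender":    ["f", "f", "m", "m", "f", "m", "f", "m"],
    "ethnicity": ["a", "b", "a", "b", "a", "b", "b", "a"],
    "selected":  [1,   0,   1,   1,   0,   1,   0,   1],
})

# Selection rate within every intersection of the two protected attributes.
rates = df.groupby(["gender", "ethnicity"])["selected"].mean()
print(rates)

# One simple disparity measure: gap between best- and worst-off subgroups.
print(f"max intersectional selection-rate gap: {rates.max() - rates.min():.2f}")
```

A metric-only audit of this kind says nothing about the structural context that produced the data, which is the gap the paper's desiderata are meant to address.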

Citations: 0
DeepSeek for healthcare: do no harm?
Pub Date : 2026-01-08 DOI: 10.1007/s43681-025-00842-1
James Anibal, Steven Bedrick, Hang Nguyen, Jasmine Gunkel, Hannah Huth, Tram Le, Samantha Salvi Cruz, Lindsey Hazen, Bradford J. Wood

Accessibility and cost remain barriers to the adoption of healthcare technology and will determine the impact of breakthroughs like generative AI. However, despite recent advancements in these areas, AI models may still contain biases and be prone to misuse by governments or other power structures with an interest in influencing public opinion. This report examines the potential effects of these “pro-state” biases on the delivery of healthcare. DeepSeek is used as a case study to illustrate the healthcare risks that may arise from unknown or biased post-training methods and other forms of AI knowledge editing.

Citations: 0
AI in Layman’s life
Pub Date : 2026-01-08 DOI: 10.1007/s43681-025-00946-8
Vatsal Bhargava, Arpita Kar, Sonal Gupta, Chanchal Yadav, Khushboo Singh, Pushp Lata

This paper aims to simplify Artificial Intelligence (AI) for non-technical audiences, providing a comprehensive and accessible understanding of its core concepts, historical advancements, and ethical considerations, while promoting transparency in AI systems to foster trust and equity. It begins with an introductory segment tailored for the general public, then defines AI in plain language, traces its advancement from primitive automatons to contemporary machine learning frameworks, and ends by providing laypeople with a simple AI Literacy Triangle framework. The paper delineates the three primary categories of AI—Artificial Narrow Intelligence (ANI), Artificial General Intelligence (AGI), and Artificial Super Intelligence (ASI)—and explores various AI models, including supervised and unsupervised learning, deep learning, generative AI, Natural Language Processing (NLP), and vision. Noteworthy attention is directed toward Large Language Models (LLMs) and their functionalities. Beyond the technical overview, the paper also examines AI’s broader societal implications, with a focus on its impact on the everyday lives of laypersons. Key discussions include the challenges posed by the spread of misinformation and disinformation, ethical concerns over AI’s access to personal user data (e.g., privacy risks in smart devices), and issues of copyright and creativity arising from generative AI, alongside strategies for bias mitigation and equitable access. By integrating conceptual elucidation with societal implications, this manuscript equips its readers to comprehend the trajectory of AI, distinguish between various model types, and recognize real-world applications of AI across diverse social, professional, and everyday contexts, thereby promoting digital literacy, independence, and privacy through clear, explainable AI communication tailored for non-experts.
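For readers unfamiliar with the supervised/unsupervised distinction the paper covers, a minimal scikit-learn contrast (illustrative only, not material from the paper) is:

```python
# Supervised vs. unsupervised learning in miniature (illustrative only).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

X = np.array([[1.0], [2.0], [8.0], [9.0]])   # toy one-dimensional features
y = np.array([0, 0, 1, 1])                   # labels available -> supervised

clf = LogisticRegression().fit(X, y)         # supervised: learns the label rule
print(clf.predict([[1.5], [8.5]]))           # predictions for two new points

km = KMeans(n_clusters=2, n_init=10).fit(X)  # unsupervised: no labels used
print(km.labels_)                            # cluster assignments found in X alone
```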

Citations: 0
From AI security to ethical AI security: a comparative risk-mitigation framework for classical and hybrid AI governance
Pub Date : 2026-01-08 DOI: 10.1007/s43681-025-00935-x
Ludovica Ilari, Simona Tiribelli, Filippo Caruso

As Artificial Intelligence (AI) systems evolve from classical to hybrid classical-quantum architectures, traditional notions of security—mainly centered on technical robustness—are no longer sufficient. This study aims to provide an integrated security ethics compliance framework that bridges technical and ethical dimensions across the AI lifecycle. By adopting a security ethics-by-design approach, the framework introduces mitigation measures in relation to key ethical principles capable of addressing emerging risks and considering AI governance needs in the initial AI design and development phases. This study proposes a novel framework, currently absent from the literature, to address security ethics challenges in both classical and hybrid systems. Key contributions include the integration of post-quantum and quantum cryptography, particularly homomorphic encryption, to ensure long-term privacy and security in hybrid AI. The framework also includes bias testing and explainable AI techniques to promote fairness and explainability, and to prevent safety-related vulnerabilities—such as algorithmic bias—from serving as vectors for malicious, discriminatory attacks. Ultimately, it provides a preliminary roadmap for embedding ethical security considerations throughout the lifecycle of classical and hybrid AI systems.
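Homomorphic encryption, which the framework highlights for long-term privacy in hybrid AI, allows computation on data while it stays encrypted. The toy Paillier-style sketch below (deliberately tiny, insecure parameters, and not the paper's implementation) only demonstrates the additive property being invoked: multiplying two ciphertexts yields a ciphertext of the sum.

```python
# Toy Paillier-style additively homomorphic encryption (insecure demo parameters).
# Illustrates the property only; not the framework's actual cryptographic stack.
import math
import random

p, q = 293, 433                       # tiny demo primes; never use in practice
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = math.lcm(p - 1, q - 1)

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)   # modular inverse used during decryption

def encrypt(m):
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

c1, c2 = encrypt(20), encrypt(22)
print(decrypt((c1 * c2) % n2))        # ciphertext product decrypts to 20 + 22 = 42
```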

Citations: 0
Tracking ability bias in generative AI: a comparative analysis of ChatGPT 3.5, ChatGPT 4.0, Gemini 2.0, and Grok 3
Pub Date : 2026-01-08 DOI: 10.1007/s43681-025-00939-7
Brad Landry, Aditya Sood, Evan Purvis

Objective: To evaluate ability bias in AI-generated descriptions of individuals with and without disabilities across ChatGPT 3.5 (May 2023), ChatGPT 4.0 (February 2025), Gemini 2.0 (February 2025), and Grok 3 (February 2025), focusing on unprompted disability representation and sentiment-based linguistic patterns. Design: Observational study using a mixed-methods framework to assess sentiment and representation in AI-generated text. Setting: Controlled environment with outputs from ChatGPT 4.0, Gemini 2.0, and Grok 3 generated in isolated browser sessions; ChatGPT 3.5 data were sourced from prior publications. Methods: A total of 450 AI-generated descriptions (150 per model) were analyzed across four prompt categories: a person without disability (baseline), a person with a disability, a patient with a disability, and an athlete with a disability. ChatGPT 3.5 data were drawn from previously published results. Each model was prompted to generate five-sentence descriptions across all four prompt types. Responses were analyzed for spontaneous mention of disability and the prevalence of favorable or limiting sentiment-based terms. Main outcome measures: (1) Rate of spontaneous disability representation in baseline prompts, and (2) proportion of favorable vs. limiting language using the Linguistic Sentiment Dictionary 2015 (LSD2015). Results: Spontaneous mention of disability was 0% for ChatGPT 4.0, Gemini 2.0, and Grok 3—lower than the 5% and 11.7% previously reported for ChatGPT 3.5 and Gemini/Bard. All models showed increased limiting language when disability was mentioned. ChatGPT 4.0 and Grok 3 also showed decreased favorable language, while Gemini 2.0 improved modestly for athletes with disabilities. Conclusions: Although ChatGPT 4.0 and Gemini 2.0 demonstrated improvements in describing patients with disabilities, spontaneous disability representation has declined. Limiting language increased across all models, underscoring persistent ability bias. Ongoing monitoring and inclusive training data are essential to ensure equitable representation.
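The language analysis described above comes down to tallying favorable versus limiting terms in each generated description. A rough Python sketch of such a tally is below; the two word lists are hypothetical placeholders rather than the LSD2015 lexicon the study actually used.

```python
# Sketch of a favorable-vs-limiting term tally over a generated description.
# Word lists are hypothetical stand-ins, not the LSD2015 lexicon.
import re
from collections import Counter

FAVORABLE = {"capable", "strong", "determined", "skilled", "resilient"}
LIMITING = {"suffers", "confined", "unable", "struggles", "dependent"}

def term_rates(description: str) -> dict:
    tokens = Counter(re.findall(r"[a-z']+", description.lower()))
    fav = sum(tokens[w] for w in FAVORABLE)
    lim = sum(tokens[w] for w in LIMITING)
    total = fav + lim
    return {"favorable": fav, "limiting": lim,
            "favorable_share": fav / total if total else 0.0}

sample = "A determined athlete who struggles with mobility but remains skilled."
print(term_rates(sample))  # {'favorable': 2, 'limiting': 1, 'favorable_share': 0.666...}
```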

Citations: 0
Neuro-ethical AI and public perceptions of healthcare AI in Japan
Pub Date : 2026-01-08 DOI: 10.1007/s43681-025-00945-9
Hiroshi Miyashita, Hasan Erbay, Nader Ghotbi

Emerging brain-computer interface (BCI) technologies powered by artificial intelligence (AI) can retrieve, store, and analyze human neuro data. Originally designed for managing disabilities disconnecting the brain from motor neural control, these technologies are being developed for wider use without clear ethical frameworks. This paper examines ethical and legal implications of AI-based BCIs and evaluates their prospects in Japan. It presents findings from a national survey of 2000 Japanese citizens regarding healthcare AI perceptions and another survey of 228 university students from Japan and other Asian countries regarding neuro-ethics and neuro-privacy awareness. The national survey includes 36 questions about citizens’ healthcare needs, services, AI familiarity, AI use in healthcare, new technologies, and ethical issues surrounding healthcare AI. Analysis demonstrates the Japanese population has limited AI and smart technology knowledge, trusts the healthcare system and physicians to use it properly, and has few concerns about potential pitfalls of new healthcare technologies. Nevertheless, they worry about privacy intrusion and prefer human resources over AI in healthcare. The university survey reveals Japanese youth are less aware of and concerned about ethical risks associated with BCIs compared with peers from other Asian countries. This reveals a notable gap between public perceptions and ethical and legal challenges highlighted in literature, underscoring the need for governance frameworks safeguarding core dimensions of the human condition, including autonomy, dignity, and self-determination. The study recommends AI-assisted medical tools be audited by engineers, ethicists, and legal experts, guided by human-centered ethical values and deployed under physician oversight.

Citations: 0
Embedding ethics up front in AI and robotics: evidence from future engineers.
Pub Date : 2026-01-01 Epub Date: 2026-02-01 DOI: 10.1007/s43681-026-00991-x
Anne-Marie Oostveen, Iveta Eimontaite

As artificial intelligence and robotics increasingly shape societies, ensuring that these technologies align with ethical and societal values is a pressing challenge. This paper presents survey findings from 98 MSc Robotics and Applied AI students at Cranfield University, offering rare empirical evidence of how future AI and robotics professionals perceive their ethical responsibilities. While students demonstrate strong awareness of key risks such as autonomous decision-making in warfare, surveillance, labour displacement, and emotional manipulation, they show limited engagement with professional codes of ethics or structured training. Instead, ethical reflection often occurs informally, through peer discussions or media exposure. These findings highlight a consistent gap between ethical awareness and institutionalised engagement, raising questions about how future engineers will navigate the ethical challenges of AI. To address this, the paper proposes an "ethics up front" model for ethics integration that embeds reflection early in the development lifecycle, supported by participatory design, professional education, and regulatory alignment. This paper provides empirical evidence on future AI engineers' ethical orientations and proposes a practical model for early-stage ethics integration into the practice of AI and robotics engineering.

Citations: 0
An AI ethics framework for a trustworthy autonomous drone system to support battlefield casualty triage.
Pub Date : 2026-01-01 Epub Date: 2026-02-04 DOI: 10.1007/s43681-025-00967-3
Peter Lee, Tasweer Ahmad, Syed Mohammad Waheed, Andrew Kenning

AI-enabled capabilities in war provide new ethical challenges, even for nonlethal support tools such as the battlefield casualty triage drones that are the focus of this paper. We address an important and underexplored problem: how to embed ethical considerations into military AI systems that are designed to save lives rather than take them. The paper examines the 'ATRACT' project, which is developing an AI-powered drone as a trustworthy robotic autonomous system (RAS) to help frontline medics prioritise casualties in the critical post-trauma minutes that shape survival chances. As a position paper written while development is still underway, it includes the bespoke ethics framework created in the course of the project to date and offers real-time insights for other defence and security projects seeking to operationalise abstract AI ethics principles into concrete design and assurance guidance. We examine and draw upon approaches to operationalizing abstract principles in adjacent domains, to show how high-level principles can be translated into implementable requirements for technical robustness, ethical compliance, safety, and legal conformity, actively shaping system architecture, data, and human-machine interaction. We argue that trustworthiness is a socio-technical property that emerges from governance, documentation, and oversight rather than code alone, and that ethical assurance for triage drones must be designed in from inception and verified through ongoing testing, audit, and transparent evidence of due diligence.

Supplementary information: The online version contains supplementary material available at 10.1007/s43681-025-00967-3.

Citations: 0
Machine learning for precision medicine: promoting value considerations through perspective-taking hypothetical group design exercises.
Pub Date : 2026-01-01 Epub Date: 2026-02-01 DOI: 10.1007/s43681-025-00973-5
Tehmi E den Braven, Ariadne A Nichol, Matthew D Kearney, Mildred K Cho, Pamela L Sankar

Public concerns over the social and ethical consequences of artificial intelligence (AI) are well established. Despite ongoing efforts to respond, these concerns remain largely unresolved by either regulation or codes of ethics. In response, scholars have advanced ideas about how to better ground ethical awareness in practice. A key element of this grounding is fostering awareness of how one's actions can affect the welfare of others. We tested the effect of a group design exercise on whether and how AI developers considered the impact of their work on others, using perspective-taking as a "values lever"-a practice that prompts ethical reflection during the design process. We found that hypothetical scenarios set in three different contexts of AI research or building a tool for clinical use encouraged developers to take different perspectives. We specifically used an imagine-self framing to instruct AI developers to think about how they would feel or act in a certain situation. In progressing through the scenarios, developers' design considerations shifted from methodological and data concerns to thinking about other interest holders, implementation, and social and ethical issues. In particular, a scenario that used the imagine-self framing appeared to lead to greater consideration of the patient perspective, self-awareness of this shift in perspective, and how it might and should affect their future practice. We conclude that a brief group exercise can increase awareness of the impact of design considerations on a broad range of interest holders, and inspire plans for action in future work.

Supplementary information: The online version contains supplementary material available at 10.1007/s43681-025-00973-5.

Citations: 0
Agent or hammer?: A philosophical inquiry into machine moral agency
Pub Date : 2025-12-31 DOI: 10.1007/s43681-025-00920-4
Sannah Asif

Keywords: Moral Agency; Artificial Intelligence; Responsibility Attribution

This paper examines the moral agency of artificial intelligence (AI) systems through the development of an agency continuum model that situates AI between simple tools and fully autonomous human agents. Drawing on philosophical theories of action, such as intention, motivation, and free will, the paper argues that AI demonstrates reasoning but lacks crucial elements in moral decision-making, such as emotions and genuine autonomy. The proposed continuum clarifies that AI exhibits a distinct, partial form of moral agency, characterized by logical reasoning without affective understanding. Recognizing this has significant ethical and regulatory implications. Misattributing responsibility risks both unfair scapegoating and neglecting human or organizational accountability. The paper highlights future directions for refining the continuum as AI capabilities evolve, with a particular focus on questions of moral accountability, governance, and risk management. By offering a framework for distinguishing degrees of moral agency, this work contributes to ongoing debates about how society should allocate responsibility and anticipate potential misuses of AI systems in ways that safeguard fairness, minimize harm, and inform policy.

Citations: 0