
Latest publications in AI and ethics

Ethical and equitable approaches in AI for vector-borne disease management
Pub Date : 2025-12-22 DOI: 10.1007/s43681-025-00933-z
Jessica J. Williams, Ioanna Angelidou, Maria Cholvi, Perparim Kadriaj, Angeliki F. Martinou, Nadejda Mocreac, Song-Quan Ong, Ferhat Sadak, Jiří Skuhrovec, Enkelejda Velo, Branimir K. Hackenberger

Artificial intelligence (AI) is increasingly being incorporated into public health strategies for vector-borne disease (VBD) management, offering several advances in surveillance, prediction, and control. At the same time, however, the integration of AI technologies raises critical ethical and equity concerns, particularly in regions disproportionately affected by VBDs. Here, we explore seven key ethical and equitable challenges in the use of AI for VBD management: (1) data quality and representativeness, (2) risk of discrimination and inequality reinforcement, (3) transparency and reproducibility, (4) privacy and data protection, (5) cybersecurity, (6) fair and equitable benefit-sharing, and (7) environmental considerations. Within each of these challenges, we highlight how unaddressed ethical and equity issues can exacerbate health disparities and undermine public trust. We then propose actionable pathways forward, including inclusive data governance, transparency-enhancing tools, and environmentally conscious AI practices. By highlighting how accounting for these ethical and equity concerns during AI development and deployment can further progress towards the United Nations Sustainable Development Goals, we advocate for a more responsible and inclusive approach to AI in VBD management.

Citations: 0
Large language models for clinical trials in the Global South: opportunities and ethical challenges
Pub Date : 2025-12-21 DOI: 10.1007/s43681-025-00943-x
Hafiz Ahmed

Large language models (LLMs) show promise for improving clinical trials in wealthy countries but remain underexplored in low- and middle-income countries (LMICs), where healthcare infrastructure is weaker and resources are limited. This article explores opportunities for LLM integration and addresses ethical challenges in LMIC clinical trials through a review of recent literature (2024–2025) on LLMs in healthcare and clinical research, using thematic analysis to examine adaptation potential and to identify ethical issues specific to LMIC contexts, including data control, fairness, and sustainability. LLMs can accelerate protocol development, improve multilingual patient recruitment, streamline regulatory processes, and address data gaps through synthetic records; however, implementation raises concerns about data privacy, community representation, AI transparency, and technological dependence on foreign platforms. While LLMs can enhance clinical trial efficiency and inclusivity in LMICs, successful integration requires locally adapted models, community-centered ethical oversight, and regional partnerships, with thoughtful implementation potentially democratizing healthcare innovation benefits across Global South populations.

Citations: 0
Reasons-based artificial agents
Pub Date : 2025-12-21 DOI: 10.1007/s43681-025-00932-0
Federico L. G. Faroldi

Can artificial agents be moral agents themselves? In this paper, I develop the foundations of reasons-based artificial agents and argue that this approach is superior to some alternative approaches to AI ethics. I focus on reinforcement-learning agents for concreteness, discussing some methodological issues along the way.

Citations: 0
Why AI might not gain moral standing: lessons from animal ethics
Pub Date : 2025-12-19 DOI: 10.1007/s43681-025-00919-x
Matti Wilks, Ali Ladak, Steve Loughnan

In recent years there has been a growing interest in the notion of AI consciousness—the question of whether artificial intelligences (AIs) can be conscious, and under what conditions this might emerge. This interest extends beyond academia to industry and the media. The question of AI consciousness is underpinned by a moral question: should conscious AIs be granted moral standing? Emerging philosophical literature has begun to explore these ideas. We argue that these discussions neglect relevant psychological literature that can inform another element of this question—how our social and cognitive biases may impact our willingness to ascribe moral standing to AIs. In the current paper, we draw on the literature that examines moral consideration for non-human animals, and argue that similar biases limit moral standing for AI.

Citations: 0
From optimization to inquiry: a Deweyan criterion for machine intelligence
Pub Date : 2025-12-19 DOI: 10.1007/s43681-025-00911-5
Ghassan Abukar

Can artificial intelligence systems genuinely inquire—or is optimization the limit of their intelligence? This paper argues that the boundary between inquiry and optimization marks the defining criterion of machine intelligence. Drawing on John Dewey’s pragmatist epistemology, I contend that inquiry, in Dewey’s sense, is distinguished by a capacity that optimization lacks: the power to reconstruct its own problem space. While optimization assumes a fixed goal and searches for efficient means, inquiry can recognize when its initial framing fails and reconstitute the very situation it seeks to resolve. This capacity for problem-reconstruction, I argue, is not merely a desirable feature of advanced AI but the defining characteristic of genuine intelligence. After reconstructing Dewey’s logic of inquiry, the paper demonstrates its computational implementability. It then examines the ontological status of machine inquiry through enactivist cognitive science (Varela, Thompson, and Rosch, 1991) and Dennett’s concept of “competence without comprehension” (2017). I engage critical objections from Dreyfus and Floridi, arguing that computational systems can perform genuine inquiry in a functionally and epistemically grounded sense. The paper concludes with ethical and existential implications concerning responsibility and alignment. The central claim is this: If we define intelligence as the capacity to navigate genuine novelty and indeterminacy, then AI systems must possess the ability to redefine the very tasks they confront—this is the Deweyan condition for machine inquiry, where intelligence entails reflective adaptation within social and ethical contexts. Today’s AI systems excel at solving predefined tasks yet remain incapable of the creative, reconstructive intelligence that defines genuine inquiry. If we seek to build truly intelligent machines, we must move beyond optimization.

Citations: 0
Algorithms, language, and poetry: a phenomenological perspective
Pub Date : 2025-12-19 DOI: 10.1007/s43681-025-00948-6
Daniel Turillazzi Fornés, Angelo Trotta

This paper examines the algorithmic formalization of language through a phenomenological lens, engaging Martin Heidegger and Maurice Merleau-Ponty in dialogue with contemporary large language models (LLMs) and related AI systems. Instead of treating computationally modeled language as a neutral medium for information transfer, we argue that both formal logic and data-driven models are historically specific crystallizations of a more primordial field of embodied expression. The idea of the “unity of language” refers to the dynamic, historically situated field of expressive possibilities within which multiple linguistic systems — natural languages, formal calculi, code, poetic language — emerge, sediment, and transform. Drawing on Merleau-Ponty’s account of embodied speech, we reconstruct language as a living, self-renewing medium whose unity lies in its ongoing capacity to generate new sense. Heidegger’s analysis of technological “enframing” (Gestell) and his reflections on “traditional language” then allow us to interpret algorithmic conceptions of language as powerful, yet as critically implicated in the existential risk of reducing speech to optimizable signals within the wider field of linguistic life. We confront these insights with current developments in AI, including LLMs, embodied AI, and enactive or 4E approaches to cognition. We conclude by sketching phenomenologically informed criteria for language technologies that respect expressive openness, relational depth, and the historicity of signifiers, and indicate how such criteria can orient debates in AI ethics.

Citations: 0
Conversational AI agents in education: an umbrella review of current utilization, challenges, and future directions for ethical and responsible use
Pub Date : 2025-12-19 DOI: 10.1007/s43681-025-00916-0
Amrita Ganguly, Nafisa Mehjabin, Aqdas Malik, Aditya Johri

The use of Conversational AI (CAI) agents within education has seen a rise with the rapid integration of generative AI (GenAI). The generative ability of the application combined with conversational capabilities has enhanced the perceived and actual usefulness of CAI applications. Given this development, it is critical to undertake a comprehensive review to understand the actual application domains, challenges, and efforts within this area. A range of empirical studies as well as reviews have been undertaken in recent years, but the current understanding remains fragmented. To better understand the current state of the art, emerging trends, and future implications of CAI for education, we conducted an umbrella review (UR) to systematically synthesize findings from thirty-four review articles. Articles were collected through a search across five major databases. They were screened using predefined eligibility criteria focusing on CAI agents used across educational domains and contexts. The PRISMA framework for transparent reporting is followed throughout the process and a thematic analysis has been undertaken to analyze the data. The results show that CAI utilization is concentrated in pedagogical applications such as teaching support, psychological engagement, and metacognitive development, while administrative functions, research assistance, and specialized training remain underdeveloped. Technical limitations and concerns with educational impact dominate discussions. Ethically, human-AI relationship concerns persist across all CAI generations, while academic integrity and data privacy represent emerging areas of concern. The review reveals gaps in CAI frameworks: lack of end-to-end design guidance, weak CAI-specific usability methods, unclear pedagogical guidance and classroom implementation strategies, and limited AI literacy support.
The article concludes by proposing a roadmap for ethical CAI implementation in education and identifying priority areas for future research.

Citations: 0
Indigenous ethics and artificial intelligence
Pub Date : 2025-12-17 DOI: 10.1007/s43681-025-00879-2
Milton Maldonado, Daniela Córdova-Pintado

This article examines Indigenous ethics of reciprocity as a normative and epistemological framework that challenges Western linear conceptions of time, economy, and social relations. Drawing from Marcel Mauss’s theory of the gift and Andean traditions, reciprocity is conceived not as mere exchange but as a vital principle of survival that structures human, communal, and cosmic relations. The acts of giving, receiving, and returning are understood as sacred obligations that guarantee continuity, balance, and mutual recognition across generations. Andean temporality, cyclical and rooted in natural and agricultural rhythms, situates reciprocity beyond economic utility and embeds it within cultural and cosmological orders. Historical encounters, such as colonial Christianity, illustrate the adaptive and inclusive nature of reciprocity, which facilitated intercultural coexistence and survival. Within this horizon, wealth is defined as relational abundance, measured by kinship networks and the capacity to fulfill communal obligations. Practices such as ayni (reciprocal labor), minka (collective work), and randi-randi (generalized reciprocity) embody this ethical system. Reciprocity thus emerges as both epistemology and moral system: a categorical imperative that governs enduring relations among humans, nature, and the cosmos. Returning a gift is not optional but a universal moral principle grounded in respect for Indigenous law, often misunderstood within Western frameworks. Despite growing critiques of Artificial Intelligence, non-Western epistemologies remain excluded. AI redefines truth as technical construction, suppressing subjectivity, dissent, and plurality, while fostering algorithmic obedience that undermines political imagination. The ethics of reciprocity offers a counterpoint, demanding the recovery of relational responsibility as a moral and political principle to guide the development of more just, situated, and human technologies.

Citations: 0
Fostering an enabling environment for health AI innovation and scale: The need for tailored ethics training for innovators in low- and middle-income countries
Pub Date : 2025-12-17 DOI: 10.1007/s43681-025-00921-3
Raffaele Joseph, Liya Wassie, Hailemichael Getachew, Doris Wangari, Evelyn Gitau, Claude Pirmez, Dominique Laforest, Richa Vashishtha, Ndeya M. Samb, Zoleka Ngcete, Mosoka P. Fallah, Rosa Tsegaye Aga, Michael Mihut, Andreas Alois Reis, Abraham Aseffa, Alemseged Abdissa

Artificial intelligence (AI) is a rapidly evolving technology with transformative potential across many fields, including health. Interest in AI is growing globally, including in low- and middle-income countries (LMICs), but few studies have assessed the knowledge, experiences, and ethical practices of innovators in this space. This study aimed to assess the knowledge, experiences, challenges, and ethical practices of researchers and innovators involved in AI-driven health research, in order to support the responsible integration of AI into health innovation. A cross-sectional, semi-quantitative online survey was conducted between February and May 2025 among health AI innovators in LMICs, primarily through the Grand Challenges network and its partners. Fifty respondents from 13 countries participated; the majority (92%) emphasized the importance of ethical principles, and 80% perceived AI-related ethical risks as greater than moderate. However, most respondents (74%) were unsure of their competence in identifying and addressing the risks associated with AI practices, and the same proportion reported having received no formal training in AI ethics. Respondents identified key challenges such as weak governance, limited ethics capacity, and risks of bias and data misuse. There was strong demand for training on ethics, bias mitigation, explainability, data governance, and privacy. The findings reveal an encouraging level of awareness but underscore the need for formal training, clearer guidelines, and context-specific frameworks to ensure ethical AI development and deployment in health research.

AI and ethics, vol. 6, no. 1. Open access PDF: https://link.springer.com/content/pdf/10.1007/s43681-025-00921-3.pdf
Citations: 0
Transforming healthcare: a critical analysis of artificial intelligence in reforming the U.S. system
Pub Date : 2025-12-16 DOI: 10.1007/s43681-025-00924-0
Alberto Boretti

The U.S. healthcare system is plagued by systemic challenges, including corporate dominance and a profound erosion of public trust in institutions such as the CDC and FDA. This crisis, rooted in inefficiencies, inequities, and a profit-centric model, has left millions without adequate care, exacerbating health disparities and fueling a public health emergency. This paper argues that while artificial intelligence (AI) applications in healthcare governance could potentially remedy this problematic situation, their implementation is fraught with challenges. The article offers a pointed discussion of the governance weaknesses that exist within the U.S. health system, and it critically examines whether AI technology genuinely has the potential to tackle deep-rooted systemic and institutional failures. AI might mitigate these significant institutional issues in some contexts but could inadvertently exacerbate them in others if not implemented with rigorous oversight. A particularly unsatisfactory aspect of current proposals is their inadequate and superficial treatment of data quality and accessibility concerns, which are essential to the successful implementation of AI. Given that both the data infrastructure and the legal frameworks currently in place are insufficient, this paper argues that a more critical and nuanced analysis is required to navigate the practical and ethical challenges facing healthcare transformation. This vision requires dismantling the problematic aspects of a profit-driven model and addressing the moral and structural failures that have left the U.S. lagging behind its peers, with a clear understanding that AI is a tool that requires careful, ethical, and equitable implementation.

AI and ethics, vol. 6, no. 1.
Citations: 0