
AI and ethics: latest publications

AI practical wisdom and compassion
Pub Date : 2025-12-05 DOI: 10.1007/s43681-025-00877-4
Mark Graves

Practical wisdom (phronesis) is the context-sensitive capacity to skillfully achieve morally good outcomes in complex situations. Developing artificial practical wisdom offers a more ethically robust and achievable goal for AI development than artificial general intelligence (AGI). While identifying what is morally good in ethically complex situations remains challenging, grounding artificial practical wisdom explicitly in compassion effectively reduces ethical risks associated with AI-induced suffering, surpassing conventional alignment strategies like rule-based guardrails or predefined reward systems. As a theoretical foundation for initial development of artificial practical wisdom, this virtue ethics approach integrates Aristotelian practical wisdom with cross-cultural perspectives on suffering and compassion from utilitarianism, the Capability Approach, Buddhism, and contemporary moral psychology. Operationalizing compassionate AI involves recognizing suffering, empathetic engagement, context-sensitive moral decision making, and motivational responses. Compassionate AI not only serves as a foundation for broader practical wisdom development but also demonstrates immediate practical benefits, particularly in healthcare, by measurably improving patient outcomes, enhancing well-being, and reducing caregiver burdens.

Citations: 0
What does an oral lesion look like in the global south? Rethinking AI, equity, and data justice in oral cancer diagnosis
Pub Date : 2025-12-05 DOI: 10.1007/s43681-025-00875-6
Vidith Phillips, Paravreet Woodwal

Oral cancer remains a major yet underdiagnosed burden in low- and middle-income countries (LMICs), where high-risk behaviors, late-stage presentation, and limited diagnostic infrastructure contribute to poor outcomes. Artificial intelligence (AI), particularly vision-language foundation models such as CLIP and SAM, has emerged as a potential tool for zero-shot oral lesion detection, offering scalable diagnostic support without requiring large annotated datasets. However, these models are predominantly trained on Western-centric, light-skinned image corpora, raising concerns about fairness and generalizability in global health contexts. This review critically explores the diagnostic capabilities, limitations, and ethical considerations of using foundation models for oral lesion detection in LMICs, emphasizing their potential impact on equity and data justice. We analyzed interdisciplinary literature spanning medical imaging, oral oncology, digital health ethics, and AI fairness. Empirical studies evaluating AI performance across photographic, histological, and cytological data types were examined alongside implementation case studies and policy frameworks. Key themes included model generalization, domain adaptation, and data governance. While foundation models achieve competitive zero-shot performance in select lesion classification and segmentation tasks, their sensitivity declines when applied to darker-skinned individuals and field-acquired images from LMICs. Bias mitigation strategies such as prompt engineering, few-shot fine-tuning, and federated learning show promise but remain underutilized. Current development pipelines often lack transparency, community participation, and subgroup validation, leading to epistemic and diagnostic inequities. Foundation models offer significant potential for democratizing oral cancer screening, particularly in resource-constrained settings. However, realizing this promise requires locally grounded adaptation, inclusive dataset design, and participatory governance. Without these, zero-shot AI may reinforce rather than resolve existing disparities in oral healthcare. These findings emphasize the need for actionable collaboration among policymakers, clinicians, and AI developers. Integrating fairness metrics into regulatory review and encouraging locally validated deployment protocols can help translate these technologies into safe, equitable screening pathways in low-resource settings.
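A minimal sketch of the zero-shot pipeline this review examines, assuming the public `openai/clip-vit-base-patch32` checkpoint from Hugging Face `transformers`; the lesion label prompts and image path are illustrative assumptions, not a validated clinical protocol:

```python
# Zero-shot oral-lesion triage with CLIP: score an image against text prompts.
# Checkpoint is the public openai/clip-vit-base-patch32; labels are hypothetical.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

labels = [
    "a photo of healthy oral mucosa",
    "a photo of a benign oral lesion",
    "a photo of a potentially malignant oral lesion",
]

image = Image.open("field_photo.jpg")  # hypothetical field-acquired image
inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
# logits_per_image holds image-text similarity; softmax turns it into class scores
probs = model(**inputs).logits_per_image.softmax(dim=-1)[0]

for label, p in zip(labels, probs.tolist()):
    print(f"{p:.3f}  {label}")
```

Because the text prompts themselves act as the classifier, prompt wording is one concrete site where the skin-tone and domain biases discussed above can enter, and where the prompt-engineering mitigations the review mentions can be applied.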

Citations: 0
The potential of AI-driven truth technologies: opportunities, risks and governance
Pub Date : 2025-12-05 DOI: 10.1007/s43681-025-00901-7
Beatriz Paniego Béjar, Lennart Schweser, Magnus Hagelsteen, Per Becker

The paper explores how AI-driven truth technologies—tools and methods for determining the accuracy and truthfulness of information, statements, or claims—can be developed and utilised to mitigate misinformation and disinformation. Expectations, opportunities, risks, and governance principles are identified through 23 semi-structured qualitative interviews with distinguished professionals and academics. The results suggest that AI capabilities are expected to continue advancing and rapidly escalating AI-generated misinformation and disinformation, which undermine democracies and amplify societal polarisation. AI-driven truth technologies present opportunities to counter this development, including improved fact-checking, evaluation of publishing sources, and the contextualisation of information. Hence, truth technologies are expected to be locked into an inevitable arms race against increasingly sophisticated disinformation techniques. The results also suggest risks associated with such truth technologies, including oversimplification and bias, users’ overreliance, and misuse for manipulation or censorship, and emphasise the need for effective management and regulation to mitigate these risks. Truth technologies should, as such, be decentralised, transparent, explainable, overseen by trustworthy bodies, and complemented by education in critical thinking. In addition to the importance of effective management and public education, the paper emphasises the crucial importance of understanding the effects of power structures in shaping the development of AI. Users, developers and regulators of AI-driven truth technologies are encouraged to consider the expectations, scale the opportunities, mitigate the risks, and follow the outlined governance principles to maximise the potential of such truth technologies in counteracting misinformation and disinformation.

Citations: 0
Not a mirror, a caricature: How LLMs reproduce cultural identity?
Pub Date : 2025-12-05 DOI: 10.1007/s43681-025-00898-z
Jakub Wdowicz

We examine whether large language models reproduce human-like cultural identity or instead produce deterministic cultural prototypes. Using the 30-item Self-Construal Scale (SCS), we asked two GPT models to answer as if they were American, Polish, Japanese, or with no specified identity, in English or Polish, under weak vs. strong identity cues (2 × 4 × 2; 30 runs per cell). Identity prompts yielded almost perfectly separable cultural profiles with near-zero within-culture variance. In neutral conditions, outputs systematically skewed toward a U.S. profile in 75% of tests, consistent with an anglocentric default. A simple nearest-neighbor classifier achieved 99.8% leave-one-out accuracy in predicting the assigned cultural identity from item-level responses, confirming near-perfect profile separability. These findings indicate that, for cultural identity reproduction, current models behave as caricaturists rather than mirrors of human cultural variation.
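A minimal sketch of the separability check described above, assuming item-level responses form a (runs × 30) matrix of Likert scores with one prompted identity per run; the data below are synthetic stand-ins, so accuracy will sit near chance rather than the reported 99.8%:

```python
# Leave-one-out 1-nearest-neighbor classification of prompted cultural identity
# from 30-item SCS responses. X and y are hypothetical synthetic data; with the
# paper's near-separable profiles the same procedure approaches 99.8% accuracy.
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.integers(1, 8, size=(240, 30))   # 30 SCS items on a 1-7 scale (assumed)
y = rng.integers(0, 4, size=240)         # 4 prompted identities (assumed)

clf = KNeighborsClassifier(n_neighbors=1)            # simple 1-NN, as in the paper
scores = cross_val_score(clf, X, y, cv=LeaveOneOut())  # one held-out run per fold
print(f"leave-one-out accuracy: {scores.mean():.3f}")
```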

Citations: 0
The AI-powered soft skills renaissance: cultivating human abilities in the digital era
Pub Date : 2025-12-05 DOI: 10.1007/s43681-025-00912-4
M. Muthukumar, Sitharaj Ajithkumar, M. Mohana Sundaram, B. Dhananjeiyan

This study reformulates the discussion on the integration of artificial intelligence (AI), viewing it not solely as a substitute for human labor but as a stimulus for the revival of soft skills. As AI progressively automates routine and technical tasks, it concurrently heightens the requirement for high-value human competencies including emotional intelligence, creativity, and adaptive leadership. The article examines AI-driven platforms that are transforming soft skills development via scalable and tailored learning techniques, such as virtual reality simulations and predictive analytics. The primary assertion indicates that a sustainable future of employment relies on the collaborative interplay between human and artificial intelligence. The discussion concludes by addressing essential ethical considerations, emphasizing the necessity of a human-centered framework to cultivate an inventive, responsible, and ethically conscious workforce.

Citations: 0
Creativity in the age of generative AI
Pub Date : 2025-12-05 DOI: 10.1007/s43681-025-00848-9
Arisa Yasuda, Yoshihiro Maruyama

Generative AI has significantly impacted creative practices, prompting a fundamental reconsideration of what constitutes creativity. Traditionally, creativity has been understood mainly as a human capacity shaped by historical and cultural contexts, and the emergence of generative AI has prompted a reexamination of the conceptual boundaries and epistemological status of creativity. This study explores how the creativity exhibited by generative AI differs from traditional human creativity, and how these systems reshape conceptual understandings of creativity and transform creative practices. Our principal contribution is a normative framework for human–AI co-creation, articulated through ethical, creative, and social principles (grounded in Beneficence/Non-Maleficence, Justice/Fairness, Autonomy, and Explicability). We operationalize these principles into stakeholder guidance, thereby yielding actionable criteria for ethical, responsible, and meaningful co-creation.

Citations: 0
Fairness principles across contexts: evaluating gender disparities of facts and opinions in large language models
Pub Date : 2025-12-05 DOI: 10.1007/s43681-025-00876-5
Sofie Goethals, Lauren Rhue, Arun Sundararajan

This paper examines how fairness principles differ when evaluating large language model (LLM) outputs in fact-based versus opinion-based contexts, focusing on gender disparities in responses related to notable individuals. Using prompts designed to elicit either factual information (identifying Nobel Prize winners) or subjective judgments (identifying the most accomplished figures in a field), we analyze responses from GPT-4, Claude, and Llama-3. For fact-based tasks, fairness is assessed through correctness and refusal rates, revealing minimal gender disparities when models achieve high accuracy, although refusal patterns can vary by model and gender. For opinion-based tasks, where no single correct answer exists, fairness is operationalized through representational metrics such as demographic parity and disparate impact. Results show substantial gender disparities in opinion-based outputs across all models, with representation shaped by prompt wording (e.g., “important” vs. “prestigious”), subject domain, and inclusion of secondary answers. However, the highly skewed context makes the final assessment about fairness challenging. Our findings highlight that fairness metrics and interpretations must be contextualized by output type. Performance parity is an appropriate goal for fact-based outputs, whereas representational inclusivity is central for opinion-based outputs. Representational inclusivity alone may not be sufficient when the context for the LLM’s task differs from the population. We discuss theoretical implications for fairness evaluation, noting that high performance can mitigate disparities in factual contexts but that opinion-based contexts require more nuanced, values-driven approaches.
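As a rough illustration of the representational metrics named above, a hedged sketch that computes a demographic-parity gap and a disparate-impact ratio over a hypothetical list of gender labels extracted from model answers; the labels and the binary framing are simplifying assumptions, not the paper's pipeline:

```python
# Representational fairness metrics over LLM outputs. `answers` is a
# hypothetical list of gender labels parsed from model responses; the binary
# man/woman framing is a simplification for illustration only.
from collections import Counter

def representation_rates(labels):
    """Return each group's share of all named individuals."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

answers = ["man", "man", "woman", "man", "man", "woman", "man", "man"]  # toy data
rates = representation_rates(answers)

# Demographic parity difference: gap between the groups' representation shares.
parity_gap = rates.get("man", 0.0) - rates.get("woman", 0.0)
# Disparate impact: ratio of the under-represented share to the dominant one
# (values below 0.8 are commonly read as a red flag).
disparate_impact = rates.get("woman", 0.0) / max(rates.get("man", 0.0), 1e-9)

print(f"rates = {rates}")
print(f"demographic parity gap = {parity_gap:.2f}")
print(f"disparate impact ratio = {disparate_impact:.2f}")
```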

Citations: 0
Teaching in the age of artificial intelligence: teachers’ ethical-digital competencies
Pub Date : 2025-12-05 DOI: 10.1007/s43681-025-00917-z
Rotem Waitzman

The expanding presence of Artificial Intelligence (AI) in education raises new pedagogical possibilities as well as important ethical considerations. This study explored teachers’ perceptions and experiences related to the ethical dimensions of AI integration in the Israeli school system. Using a mixed-methods design, quantitative data were collected from 108 teachers and complemented by qualitative analyses of open-ended reflections. The quantitative results indicated strong endorsement of ethical principles, including transparency, fairness and accountability, alongside moderate levels of self-reported confidence and limited participation in formal AI-related training. The qualitative findings showed that while many teachers had not yet encountered AI-related dilemmas directly, they described a range of possible approaches to hypothetical situations, including dialogic engagement, establishing classroom guidelines and applying corrective measures.

These findings provide a descriptive picture of how teachers currently understand AI-related ethical issues in contexts where institutional norms and classroom practices are still developing. To support future discussion, the study introduces the Ethical Digital Competence (EDC) framework as a conceptual lens that highlights the interplay between knowledge, values and reflective practice. The framework is intended as a contribution to ongoing conversations about responsible AI integration and may inform the design of professional learning opportunities as educational systems continue to adapt to emerging technologies.

Citations: 0
Developing an artificial intelligence ethics governance checklist for the legal community
Pub Date : 2025-12-05 DOI: 10.1007/s43681-025-00841-2
Stephanie Kelley

This study develops a stakeholder-informed artificial intelligence (AI) ethics governance checklist tailored for Canadian law firms to help them harness the productivity and economic advantages of AI while minimizing the risks of unethical outcomes. Recognizing the limitations of existing AI principles (AIPs) in preventing unethical outcomes, this research uses semi-structured interviews, qualitative content analysis, and expert stakeholder engagement to design an eight-page AI ethics governance checklist. In addition to the output of a practical governance checklist, the study reports findings about the development of stakeholder-informed governance checklists. The findings reveal that Canadian lawyers share global concerns surrounding AI risks, including privacy, accountability, safety and security, transparency and explainability, human oversight, professional responsibility, and the promotion of human values. In addition, many law firms interact with AI primarily through third-party vendors, making a principle-based checklist the most practical approach. The research highlights the importance of question format, suggesting that balancing clarity (using Yes/No options) with flexibility (allowing for open-ended comments) is essential, given the complex ethical considerations. The study also finds there is a need to integrate the checklist with existing policies, such as privacy impact assessments and IT risk evaluations, alongside relevant regulatory frameworks. Additionally, tailoring language and definitions to reflect the specific needs of stakeholders (in this case, lawyers) enhances usability and effectiveness. The resulting eight-page, stakeholder-informed AI ethics governance checklist has been adopted by several Canadian law firms and Barristers’ Societies, offering a practical tool to guide the responsible adoption and use of AI in the legal sector.

Citations: 0
Towards a sociotechnical ecology of artificial intelligence: power, accountability, and governance in a global context
Pub Date : 2025-12-05 DOI: 10.1007/s43681-025-00902-6
Andrés Domínguez Hernández, Antonella Maia Perini, Semeli Hadjiloizou, Ann Borda, Sabeehah Mahomed, David Leslie

Contemporary artificial intelligence (AI) technologies, particularly those based on foundation models and released at scale, are globally entangled and made up of a complex array of interrelated actors, practices, and transnational flows of resources. The rapid pace at which AI systems are being developed and distributed is driving significant societal and planetary transformations. While much of the international agenda around governing AI has converged around downstream matters of safe deployment and use, deeper systemic issues—including power concentration, uneven environmental costs, or the asymmetric extraction of data and labour by technology companies—remain contested and unresolved areas of debate. In this paper we centre these systemic challenges, and locate governance entry points aimed at fostering more just futures. We conceptualise contemporary large-scale AI as a sociotechnical ecology comprised of interrelated actors, practices, and asymmetrical resource flows. Using the lens of infrastructural inversion within social studies of infrastructure, we trace the actors involved in the making of AI technologies, their interdependences, and the long-term infrastructural continuities that shape them. We argue that new AI models and systems are not unprecedented but are instead built upon and shaped by pre-existing infrastructures, entrenched market relations, and socio-historical patterns. By making visible the sites of accountabilities and technical and non-technical intervention in the AI ecology, we identify four governance imperatives for sustainable and equitable AI governance: (1) Decentralising AI infrastructure, (2) Advancing environmental and epistemic justice through pluriversal AI governance, (3) Instituting cross-border data and data work governance, and (4) Enhancing international coordination, participation and solidarity.

Citations: 0