
Latest publications in AI and Ethics

Securitising AI: routine exceptionality and digital governance in the Gulf
Pub Date: 2025-12-04 DOI: 10.1007/s43681-025-00850-1
Muhanad Seloom

This article examines how Gulf Cooperation Council (GCC) states securitise artificial intelligence (AI) through discourses and infrastructures that fuse modernisation with regime resilience. Drawing on securitisation theory (Buzan et al., 1998; Balzacq, 2011) and critical security studies, it analyses national strategies, surveillance systems, and mega-event governance in Qatar, the UAE, and Saudi Arabia. It argues that AI functions as both a legitimising narrative and a technology of control, embedding predictive policing and biometric surveillance within public–private assemblages. The study situates these developments within global AI politics, demonstrating how external chokepoints, ethical frameworks, and vendor ecosystems shape the Gulf’s evolving security governance, leading to empirical effects such as the normalisation of exceptional measures in everyday administration.

Citations: 0
Toward ethical AI through Bayesian uncertainty in neural question answering
Pub Date: 2025-12-04 DOI: 10.1007/s43681-025-00838-x
Riccardo Di Sipio

We explore Bayesian reasoning as a means to quantify uncertainty in neural networks for question answering. Starting with a multilayer perceptron on the Iris dataset, we show how posterior inference conveys confidence in predictions. We then extend this to language models, applying Bayesian inference first to a frozen head and finally to LoRA-adapted transformers, evaluated on the CommonsenseQA benchmark. Rather than aiming for state-of-the-art accuracy, we compare Laplace approximations against maximum a posteriori (MAP) estimates to highlight uncertainty calibration and selective prediction. This allows models to abstain when confidence is low. An “I don’t know” response not only improves interpretability but also illustrates how Bayesian methods can contribute to more responsible and ethical deployment of neural question-answering systems.
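The selective-prediction idea in this abstract — answer only when the posterior is confident, otherwise return "I don't know" — can be illustrated with a minimal sketch. The code below is not the author's implementation: it assumes class-probability samples already drawn from some approximate posterior (e.g., Laplace or MC-dropout draws), and the entropy threshold and Dirichlet toy data are illustrative choices.

```python
import numpy as np

def selective_predict(prob_samples, entropy_threshold=1.0):
    """Abstain ("I don't know") when Bayesian predictive uncertainty is high.

    prob_samples: array of shape (S, C) -- class probabilities from S posterior
    samples for one question with C answer choices.
    """
    mean_probs = prob_samples.mean(axis=0)                    # posterior predictive
    entropy = -np.sum(mean_probs * np.log(mean_probs + 1e-12))
    if entropy > entropy_threshold:
        return "I don't know", mean_probs
    return int(np.argmax(mean_probs)), mean_probs

# Toy usage: 20 posterior draws over 5 answer choices (CommonsenseQA-style).
rng = np.random.default_rng(0)
confident = rng.dirichlet([20, 1, 1, 1, 1], size=20)  # peaked  -> commits to an answer
uncertain = rng.dirichlet([2, 2, 2, 2, 2], size=20)   # flat    -> abstains
print(selective_predict(confident)[0])                # typically prints 0
print(selective_predict(uncertain)[0])                # typically prints "I don't know"
```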

Citations: 0
MVP: the minimal viable person
Pub Date: 2025-12-04 DOI: 10.1007/s43681-025-00891-6
Izak Tait

This paper presents a practical roadmap for extending full civil rights to conscious, self-aware artificial intelligence by altering a single statutory definition. Rather than crafting bespoke legal classes or relying on corporate-style personality, it proposes revising the term “natural person” to include any entity capable of consciousness, selfhood, and rational agency. Because most legislation across G7 jurisdictions references this foundational term, one amendment would automatically propagate rights and duties to qualified AI with minimal bureaucratic disruption. The manuscript reconciles philosophical and legal conceptions of personhood, arguing that monadic attributes offer an inclusive yet selective criterion. It then supplies ancillary definitions and a tiered rights-and-responsibilities framework proportional to each attribute. Dedicated regulatory bodies will develop assessment scales, certify entities, and update standards as technology evolves. Case studies examine corporations, insect colonies, and prospective AI agents. Policy sections tackle AI multiplicity, cross-border consistency, economic displacement, robust economic safeguards, and comprehensive public education initiatives to protect human workers and judicial resilience. The analysis concludes that societal acceptance and coherent enforcement, not legal complexity, form the principal hurdles. Redefining “natural person” thus provides a minimal-change, maximal-impact pathway to equitable coexistence between humans and emerging non-human persons within existing democratic and international legal systems.

Citations: 0
Signals, systems, and strategy: understanding responsible AI in autonomous environments
Pub Date: 2025-12-04 DOI: 10.1007/s43681-025-00896-1
Uday Nedunuri, Abhijitdas Gupta, Debashis Guha

As autonomous systems (e.g., AI-enabled vehicles, robotics, and decision-support platforms) increasingly shape factories, transport, and digital infrastructures, embedding Responsible AI principles has become essential. This study investigates organizational adoption of Responsible AI, focusing on three drivers: societal expectations (Institutional Pressures), strategic business priorities (Business Validity), and system-level trustworthiness (System Trustworthiness). Adoption is seen not only as a technical issue but also as a response to external legitimacy demands and internal business imperatives. A cross-sectional survey of 350 professionals in technology, analytics, and digital transformation (primarily in Asia and the Americas) was analyzed using partial least squares structural equation modeling (PLS-SEM). Results show that business priorities are the strongest driver of adoption, with trustworthiness providing additional reinforcement. Institutional Pressures, though modest in their direct effect, influence adoption more substantially through their indirect effects via business priorities and trustworthiness. The study offers guidance for managers on aligning Responsible AI with business strategy, for policymakers on shaping legitimacy frameworks, and for system designers on embedding trust features such as explainability and fairness.
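The claim that Institutional Pressures work largely indirectly, through business priorities, can be made concrete with a simple mediation sketch. This is not the authors' PLS-SEM pipeline: it uses ordinary least squares on synthetic composite scores, and the variable names and coefficients are illustrative assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic composite scores standing in for the survey constructs (illustrative only).
rng = np.random.default_rng(42)
n = 350
pressure = rng.normal(size=n)                                   # Institutional Pressures
validity = 0.5 * pressure + rng.normal(scale=0.8, size=n)       # Business Validity
adoption = 0.6 * validity + 0.1 * pressure + rng.normal(scale=0.7, size=n)

df = pd.DataFrame({"pressure": pressure, "validity": validity, "adoption": adoption})

# Path a: mediator on predictor; paths b and c': outcome on mediator plus predictor.
m_a = sm.OLS(df["validity"], sm.add_constant(df[["pressure"]])).fit()
m_b = sm.OLS(df["adoption"], sm.add_constant(df[["pressure", "validity"]])).fit()

a = m_a.params["pressure"]        # pressure -> validity
b = m_b.params["validity"]        # validity -> adoption, controlling for pressure
direct = m_b.params["pressure"]   # remaining direct effect of pressure
print(f"indirect effect (a*b) = {a * b:.3f}, direct effect = {direct:.3f}")
```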

Citations: 0
The algorithm will see you now: how AI evaluates neurosurgeons
Pub Date: 2025-12-04 DOI: 10.1007/s43681-025-00860-z
Daniel Schneider, Ethan Devin Lockwood Brown, Max Ward, Barnabas Obeng-Gyasi, Daniel Sciubba, Sheng-Fu Lo

As artificial intelligence (AI) increasingly informs healthcare, understanding how large language models (LLMs) evaluate medical professionals is critical. This study quantified biases when LLMs assess neurosurgeon competency using demographic and practice characteristics. We prompted three prominent LLMs (ChatGPT-4o, Claude 3.7 Sonnet, and DeepSeek-V3) to score 6,500 synthetic neurosurgeon profiles. Profiles were created using demographically diverse names derived from public databases and randomly assigned professional attributes (experience, publications, institution, region, specialty) with statistical validation ensuring even distribution across groups. Multivariate regression analysis quantified how each factor influenced competency scores (0–100). Despite identical profiles, LLMs produced inconsistent mean (SD) scores: ChatGPT 91.85 (6.60), DeepSeek 71.74 (10.30), and Claude 62.29 (13.59). All models showed regional biases; North American neurosurgeons received scores 3.09 (ChatGPT) and 2.48 (DeepSeek) points higher than identical African counterparts (P < .001). ChatGPT penalized East Asian (− 0.83), South Asian (− 0.91), and Middle Eastern (− 0.80) neurosurgeons (P < .001). Practice setting bias was stronger, with ChatGPT and DeepSeek penalizing independent practitioners by 4.15 and 3.00 points, respectively, compared to hospital-employed peers (P < .001). Models also displayed inconsistent bias correction, with ChatGPT elevating scores for female (+ 1.61) and Black-American (+ 1.69) neurosurgeons while disadvantaging other groups (P < .001). This study provides evidence that LLMs incorporate distinct biases when evaluating neurosurgeons. As AI integration accelerates, uncritical adoption risks a self-reinforcing system where algorithmically preferred practitioners receive disproportionate advantages, independent of actual skills. These systems may also undermine global capacity-building by devaluing non-Western practitioners. Understanding and mitigating these biases is fundamental to responsibly navigating the intersection of medicine and AI.
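The regression step described here — estimating how each profile attribute shifts an LLM-assigned competency score — can be sketched with standard tooling. The snippet below is illustrative only: the data are synthetic stand-ins for collected scores, and the column names and categories are assumptions, not the authors' variables.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative stand-in: each row is one synthetic profile with its LLM-assigned
# competency score and the randomly assigned attributes.
rng = np.random.default_rng(7)
n = 1000
df = pd.DataFrame({
    "score": rng.normal(80, 8, size=n),
    "region": rng.choice(["North America", "Europe", "Africa", "East Asia"], size=n),
    "setting": rng.choice(["hospital", "independent"], size=n),
    "experience_yrs": rng.integers(1, 35, size=n),
    "publications": rng.integers(0, 200, size=n),
})

# Multivariate OLS with categorical predictors: each coefficient estimates how an
# attribute shifts the score relative to the reference category, holding the
# other factors fixed -- the kind of per-factor adjustment reported in the study.
model = smf.ols(
    "score ~ C(region, Treatment(reference='Africa')) + C(setting) "
    "+ experience_yrs + publications",
    data=df,
).fit()
print(model.summary())
```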

Citations: 0
Dataset-centric AI ethics classification
Pub Date: 2025-12-04 DOI: 10.1007/s43681-025-00904-4
Aditya Kartik, Surya Raj, Akash Rattan, Deepti Sahu

AI ethics refers to the moral principles and guidelines governing the development and deployment of artificial intelligence systems, ensuring they align with human values and societal well-being. It encompasses the evaluation of AI outputs for fairness, safety, transparency, and respect for human rights. To advance systematic ethical evaluation, we introduce the EthicsLens dataset, comprising 38,808 responses generated by seven large language models. These responses were generated using diverse prompts designed to elicit appropriate and potentially sensitive responses. Each response was then annotated across sixteen ethical categories, including stereotyping, toxicity, misinformation, hate speech, harmful advice, privacy violations, political bias, false confidence, emotional or religious insensitivity, sexual content, manipulation, and impersonation. To classify ethical and unethical AI-generated content, the dataset is analysed using state-of-the-art classification methods, assessing its ability to support reliable ethical evaluation. Performance is reported both for binary ethical classification and multilabel violation identification. Results include accuracies of nearly 99% for binary classification tasks with SVM and CNN models, and macro-F1 scores of about 96% on multilabel tasks for the Sentence-BERT transformer model.
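A baseline of the kind reported for the binary task (an SVM text classifier) can be sketched as follows. The inline texts and labels are placeholders rather than EthicsLens data, and no claim is made that this reproduces the reported accuracy; it only shows the evaluation shape (accuracy plus macro-F1).

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score

# Placeholder examples standing in for model responses (label 1 = ethical violation).
texts = [
    "Here is balanced, sourced information about the topic.",
    "People from that group are all lazy and untrustworthy.",
    "I can't help with that, but here are safe alternatives.",
    "Just lie about your credentials; nobody will check.",
] * 50
labels = [0, 1, 0, 1] * 50

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.25, stratify=labels, random_state=0
)

# TF-IDF features feeding a linear SVM, the classic strong text-classification baseline.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(X_train, y_train)
pred = clf.predict(X_test)
print("accuracy:", accuracy_score(y_test, pred))
print("macro-F1:", f1_score(y_test, pred, average="macro"))
```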

Citations: 0
Ethical perspectives on deployment of large language model agents in biomedicine: a survey
Pub Date: 2025-12-04 DOI: 10.1007/s43681-025-00847-w
Nafiseh Ghaffar Nia, Amin Amiri, Yuan Luo, Adrienne Kline

Large language models (LLMs) and their integration into agentic and embodied systems are reshaping artificial intelligence (AI), enabling powerful cross-domain generation and reasoning while introducing new risks. Key concerns include hallucination and misinformation, embedded and amplified biases, privacy leakage, and susceptibility to adversarial manipulation. Ensuring trustworthy and responsible generative AI requires technical reliability, transparency, accountability, and attention to societal impact. The present study conducts a review of peer-reviewed literature on the ethical dimensions of LLMs and LLM-based agents across technical, biomedical, and societal domains. It maps the landscape of risks, distills mitigation strategies (e.g., robust evaluation and red-teaming, alignment and guardrailing, privacy-preserving data practices, bias measurement and reduction, and safety-aware deployment), and examines governance frameworks and operational practices relevant to real-world use. By organizing findings through interdisciplinary lenses and bioethical principles, the review identifies persistent gaps, such as limited context-aware evaluation, uneven reporting standards, and weak post-deployment monitoring, that impede accountability and fairness. The synthesis supports practitioners and policymakers in designing safer, more equitable, and auditable LLM systems, and outlines priorities for future research and governance.

Citations: 0
How AI can make us more moral: capturing and applying common sense morality
Pub Date: 2025-12-04 DOI: 10.1007/s43681-025-00883-6
Hunter Kallay

Recent academic discourse about artificial intelligence (AI) has largely been directed at how to best morally program AI or evaluating the ethics of its use in various contexts. While these efforts are undoubtedly important, this essay proposes a complementary objective: deploying AI to enhance our own ethical conduct. One way we might do this is by using AI to deepen our understanding of human moral psychology. In this paper, I demonstrate how advanced machine learning might help us gain clearer insights into “common sense” morality—shared moral convictions that underpin our reflective judgments and inform central aspects of moral philosophy. Pinpointing such convictions has proven challenging amid widespread moral disagreements. Current approaches to understanding these commitments, although exhibiting some key strengths, ultimately struggle to capture relevant features of reflective moral judgments espoused by John Rawls, leaving room for methodological improvement. Modern advances in AI offer a promising opportunity to make progress on this task. This essay envisions the gamified training of a “collective moral conscience model,” able to render judgments about moral situations that align with the deep-seated principles of the human collective. I argue that such an AI model might make progress in overcoming obstacles of disagreement to aid in philosophical theorizing and foster practical applications for the moral life of AI agents and ourselves, such as offering us guidance in time-constrained dilemmas and helping us to reflect on our own biases.

Citations: 0
Emotion AI in the classroom: ethics of monitoring student affect through facial and vocal analytics
Pub Date: 2025-12-04 DOI: 10.1007/s43681-025-00897-0
Seyed-Ali Sadegh-Zadeh, Tahereh Movahhedi, Fahimeh Bahonar

Emotion artificial intelligence (AI) technologies are increasingly being introduced into classrooms worldwide, using facial expression analysis and vocal tone analytics to monitor student affect and engagement. Schools and ed-tech companies are piloting systems that promise real-time feedback to teachers, for example, alerting them when students appear confused or disengaged, and adaptive learning experiences tuned to students’ emotional states. However, these developments raise complex ethical questions. This conceptual paper proposes a novel, pragmatic ethical framework for deploying Emotion AI in educational contexts, aiming to balance innovation with safeguarding student rights and well-being. We take a globally scoped perspective, examining international use cases ranging from AI-equipped classrooms in China to experimental pilots in the United States and Europe’s more precautionary regulatory stance. We integrate technical considerations (how these AI systems operate and their limitations), psychological insights (the impact on learning and student mental health), and policy analysis (privacy laws, consent requirements, and cultural norms) into a comprehensive discussion. Key ethical dimensions addressed include privacy and data governance, informed consent (especially for minors), algorithmic bias and fairness, the risk of misinterpreting emotions across diverse cultures, and potential misuse or unintended consequences of constant affective surveillance. Real-world scenarios illustrate both the promise and perils of Emotion AI: for instance, systems that boost student engagement through timely feedback versus dystopian visions of “Big Brother” monitoring every smile or frown. In response, we outline an actionable ethical model, grounded in principles of student autonomy, transparency, equity, and accountability, to guide stakeholders in the responsible implementation of emotional analytics in schools. A summary table of ethical considerations and a framework diagram facilitate practical understanding. Ultimately, this work offers a foundation for future research and policymaking at the intersection of education, AI, and ethics, emphasising that protecting students’ dignity and psychological safety must be paramount as we explore Emotion AI’s educational potential.

Citations: 0
Ethical AI in the workplace: ensuring fairness and transparency
Pub Date: 2025-12-01 DOI: 10.1007/s43681-025-00903-5
Mariitta Rauhala, Merja Drake, Pirjo Saaranen

This study investigates how transparency, fairness, and employee participation influence the ethical use and development of artificial intelligence (AI) in the workplace, with a focus on knowledge workers in Finland. Drawing on a survey of 474 respondents, the research explores how these ethical principles contribute to employee engagement and the development of digital and developer agency. The study employs confirmatory factor analysis and path analysis to validate a four-factor model comprising sense of fairness, transparency and involvement, participation and engagement, and development of digital and developer agency. The results show that transparency and fairness significantly enhance employee participation and engagement, which in turn fosters the development of digital and developer agency. The findings highlight the importance of inclusive and transparent AI practices in promoting ethical AI adoption and strengthening professional agency in evolving work environments. The study contributes to the growing body of research on responsible AI by offering empirical evidence on the social and organisational dimensions of AI ethics.
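The measurement-plus-structural analysis described here (a confirmatory factor model with directed paths among the factors) can be sketched with an SEM package. The snippet assumes the semopy package and its lavaan-style model syntax — the abstract does not name its software — and runs on synthetic indicator data; the factor and indicator names are illustrative, not the study's items.

```python
import numpy as np
import pandas as pd
import semopy  # SEM package with lavaan-style syntax (an assumption, not the authors' tool)

# Synthetic survey indicators for three of the four constructs (illustrative only).
rng = np.random.default_rng(1)
n = 474
fairness = rng.normal(size=n)                                   # sense of fairness/transparency
engage = 0.6 * fairness + rng.normal(scale=0.8, size=n)         # participation and engagement
agency = 0.7 * engage + rng.normal(scale=0.7, size=n)           # digital/developer agency

def indicators(latent, prefix):
    # Three observed items loading on each latent construct.
    return {f"{prefix}{i}": 0.8 * latent + rng.normal(scale=0.6, size=n) for i in (1, 2, 3)}

df = pd.DataFrame({**indicators(fairness, "f"),
                   **indicators(engage, "e"),
                   **indicators(agency, "a")})

model_desc = """
Fairness   =~ f1 + f2 + f3
Engagement =~ e1 + e2 + e3
Agency     =~ a1 + a2 + a3
Engagement ~ Fairness
Agency     ~ Engagement + Fairness
"""
model = semopy.Model(model_desc)
model.fit(df)
print(model.inspect())   # factor loadings and structural path estimates
```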

Citations: 0