
AI and ethics: latest publications

Against AI ethics: challenging the conventional narratives
Pub Date : 2026-01-29 DOI: 10.1007/s43681-025-00978-0
Saleh Afroogh, Yasser Pouresmaeil, Amit Dhurandhar

In this paper, we challenge the overreliance on conventional ethical frameworks commonly observed in current AI ethics literature. We begin by surveying the ethical concerns and frameworks that dominate the field. We then categorize and critically review existing objections to these traditional approaches in terms of conceptual challenges, professional and regulatory challenges, and challenges of practical implementation. Finally, we present three key arguments against conventional ethical frameworks: their failure to preclude ethics washing, their stifling of innovation in ethical research, and the drastic changes AI technology has brought to the ethical landscape.

Citations: 0
Justification optional: ChatGPT’s advice can still influence human judgments about moral dilemmas
Pub Date : 2026-01-28 DOI: 10.1007/s43681-026-01005-6
Sebastian Krügel, Andreas Ostermaier, Matthias Uhl

Why do users follow moral advice from chatbots? Arguably, a chatbot is not an authoritative moral advisor, but it can generate plausible arguments. We conducted a large pre-registered vignette experiment (N = 1269) that controlled for the effect that ethical justification has on subjects’ propensity to accept a chatbot’s advice. Furthermore, to study the influence of the source of the advice, we manipulated subjects’ belief that they were being advised by a chatbot or by another human. In our experiment, we find that users did not accept reasoned advice more readily than unreasoned advice. However, this was also true when we attributed the advice to a human moral advisor rather than a chatbot. Hence, we suggest that advice might offer users an easy way to escape a moral dilemma. This is a concern that chatbots did not create, but they may exacerbate it because they make advice on all manner of issues easily accessible. We conclude that it may take ethical literacy, in addition to digital literacy, to protect users against moral advice from chatbots.
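
As an illustration of how such a vignette experiment can be analyzed, the sketch below fits a logistic regression over the two manipulated factors (justification given or not, advice attributed to a chatbot or a human) with acceptance of the advice as the binary outcome. All variable names and the simulated data are assumptions for illustration only; this is not the authors' analysis code, and the simulated effects are set to zero to mirror the reported null result.

```python
# Hypothetical analysis sketch for a 2x2 vignette experiment on moral advice.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1269  # sample size reported in the abstract

df = pd.DataFrame({
    "reasoned": rng.integers(0, 2, n),  # 1 = advice came with an ethical justification
    "chatbot": rng.integers(0, 2, n),   # 1 = advice attributed to a chatbot, 0 = to a human
})
# Simulated binary outcome with no true effect of either factor (null result).
df["accepted"] = rng.binomial(1, 0.6, n)

# Logistic regression: do justification, source, or their interaction
# predict whether subjects accept the advice?
fit = smf.logit("accepted ~ reasoned * chatbot", data=df).fit()
print(fit.summary())
```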

Citations: 0
Can we automate philosophy through AI? And should we want to?
Pub Date : 2026-01-28 DOI: 10.1007/s43681-025-00960-w
Thomas J. Spiegel

Academic philosophers sometimes quip that in the future the only job safe from automation will be that of the philosophy professor. However, the current AI revolution has inspired some AI scholars to propose the future establishment of closed-loop AI systems, a type of superintelligent robot that would essentially outperform and replace human scientists (Zenil, The future of fundamental science led by generative closed-loop artificial intelligence, arXiv:2307.07522v3, 1-40, 2023; Schmidt, Mach Learn Sci Technol 5(3): 035045, 2024; Kitano, Npj Syst Biol Appl 7: 1-12, 2021). In this paper, I investigate whether, analogously, academic philosophy could be automated by putative, sufficiently advanced future AI, potentially featuring artificial embodiment as robots. To this end, I distinguish two mutually exclusive metaphilosophical conceptions of the nature of philosophy: philosophy as a set of propositions (PP) versus philosophy as an activity (PA). Granting AI proponents that natural sciences may be fully automated in the future if (and only if) artificial general intelligence (AGI), potentially embodied in superintelligent robots, is achieved, I argue for the conditional that if PP is true (but not if PA is true), then it is possible for AI to automate philosophy. Additionally, I consider what it would mean to automate philosophy given the current state of LLMs (e.g., the GPT-5 era). Finally, I briefly consider whether it would be preferable for us to have philosophy automated and argue that there are two prima facie reasons why automating philosophy, if possible, might be undesirable: the reason from obsolescence and the reason from ultimate answers.

Citations: 0
Legitimate expectations in the age of innovation
Pub Date : 2026-01-28 DOI: 10.1007/s43681-025-00980-6
Brian Kogelmann, Jeffrey Carroll

Artificial intelligence is widely expected to intensify and accelerate creative destruction, resulting in displaced workers, disrupted business models, and obsolete products. Does this violate individuals’ legitimate expectations? If so, it would provide a powerful argument for blocking innovation or compensating its losers. We argue it does not. This conclusion does not depend on rejecting the doctrine of legitimate expectations. Rather, we show that the distinctive features of the market process mean that expectations about markets are rarely legitimate in the first place. As a result, the doctrine of legitimate expectations cannot justify compensating the losers of creative destruction—an implication of particular importance for current debates about AI.

Citations: 0
Dehumanising education: AI and the capitalist capture of teaching
Pub Date : 2026-01-28 DOI: 10.1007/s43681-026-01006-5
Ahmet Küçükuncular

The integration of AI into education is often framed as a neutral or beneficial response to pressures of efficiency, scalability, and personalisation. In this paper, I challenge that framing by examining how educational AI reshapes teaching as a form of labour. Drawing on Karl Marx’s theory of alienation, I offer a conceptual analysis of how AI-mediated systems reorganise pedagogical work in ways that risk estranging teachers from the products of their labour, the labour process itself, their species-being, and their relationships with students and colleagues. Rather than treating AI in education as a monolithic phenomenon, I differentiate between generative tools, automated assessment, learning analytics, and administrative systems, showing how each participates differently in processes of standardisation, surveillance, and managerial control. I situate educational AI within the wider dynamics of platform capitalism, datafication, and audit culture, arguing that alienation is not an inevitable outcome of technology but a contingent effect of ownership structures, governance arrangements, and institutional imperatives. I conclude by outlining policy, design, and philosophical interventions aimed at reducing alienation, while acknowledging the limits of reform within marketised education systems. In doing so, I reframe AI in education as a political and ethical question about labour, authority, and the purpose of teaching, rather than a purely technical innovation.

Citations: 0
Investigating transformer models for textual bias detection in model, data, and dataspace cards
Pub Date : 2026-01-28 DOI: 10.1007/s43681-025-00975-3
Andy Donald, Apostolos Galanopoulos, Atul Kumar Ojha, Edward Curry, Emir Muñoz, Ihsan Ullah, John P. McCrae, Manan Kalra, Sagar Saxena, Talha Iqbal

Identifying hidden biases in AI documentation metadata (model, data, and dataspace cards) is essential for responsible AI; yet this domain remains largely unexplored. The proposed work evaluates four Transformer models (XLNet, DistilBERT, RoBERTa, and ELECTRA) for bias detection across publicly available, synthetic, and custom datasets. On the BABE news corpus, all models achieved 77–80% accuracy, with only ELECTRA exceeding 80% on every metric. To address the absence of publicly available AI-card datasets, we generated synthetic metadata for two use cases (Customer Interaction and Customer Data Uploaded by Organisations) using ChatGPT. Models trained on this synthetic corpus displayed near-perfect scores, reflecting shared stylistic cues embedded in the generated text. To test real-world robustness, we curated a Hugging Face dataset by scraping documentation comments, filtering for bias-related keywords, and obtaining annotations from four independent labellers in a single-blind setting. Partial fine-tuning (zero-shot) evaluations of models trained only on BABE or synthetic data revealed substantial performance drops on this real-world set. To mitigate this cross-domain loss, we introduce a cascaded, full fine-tuning (few-shot) pipeline in which Transformer models are sequentially fine-tuned on BABE, synthetic text, and a subset of the Hugging Face corpus. Evaluation on the remaining portion achieved over 85% across all performance metrics, enhancing precision and generalisation. This study demonstrates the challenges of bias detection beyond controlled or synthetic data and highlights cascaded fine-tuning as a practical, low-resource strategy. Future directions include leveraging evidence fusion methods, integrating cross-attention with bias taxonomies, and adopting dual-encoder architectures to advance bias detection toward more in-depth, knowledge-guided reasoning.
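
A minimal sketch of the cascaded fine-tuning idea described above, using the Hugging Face transformers and datasets APIs: the same classification model is fine-tuned in sequence on BABE, the synthetic card text, and a labelled subset of the scraped documentation comments, each stage starting from the previous stage's weights. The model choice, hyperparameters, and placeholder corpora are illustrative assumptions, not the authors' exact configuration.

```python
# Illustrative cascade: sequential fine-tuning over three corpora.
from datasets import Dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

MODEL_NAME = "google/electra-base-discriminator"  # ELECTRA led on the BABE metrics

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

def fine_tune_stage(model, corpus, stage_name):
    """One stage of the cascade: continue training the current weights on a new corpus."""
    ds = Dataset.from_dict(corpus).map(tokenize, batched=True).remove_columns(["text"])
    args = TrainingArguments(
        output_dir=f"./checkpoints/{stage_name}",
        num_train_epochs=3,
        per_device_train_batch_size=16,
        logging_steps=50,
        report_to="none",
    )
    Trainer(model=model, args=args, train_dataset=ds).train()
    return model

# Placeholder corpora: label 1 = biased text, 0 = neutral. In the study these would be
# the BABE news corpus, the ChatGPT-generated card metadata, and the annotated subset
# of Hugging Face documentation comments.
babe = {"text": ["example biased sentence", "example neutral sentence"], "label": [1, 0]}
synthetic_cards = {"text": ["synthetic card text", "another card text"], "label": [0, 1]}
hf_card_subset = {"text": ["scraped doc comment", "another comment"], "label": [1, 0]}

for name, corpus in [("babe", babe), ("synthetic", synthetic_cards), ("hf_cards", hf_card_subset)]:
    model = fine_tune_stage(model, corpus, name)
```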

Citations: 0
Governing by design: algorithmic normativity, clinical standards, and health policy implications of AI in healthcare
Pub Date : 2026-01-28 DOI: 10.1007/s43681-026-01002-9
Ali Asadollahi

This paper examines how clinical AI systems establish de facto standards of care through design and administrative mechanisms, analyzes power redistribution across stakeholders in EU, US, and UK governance regimes, and proposes comprehensive policy tools to mitigate unintended normative effects while enhancing equity, accountability, and public trust in healthcare AI systems. A comparative qualitative document analysis was conducted on regulatory texts (e.g., the EU AI Act, FDA guidance), hospital protocols, and vendor materials (2015–2025) from the EU, the US, and the UK. The NORM5 typology (Nudges, Override friction, Responsibility choreography, Metric coupling, Scripted workflows) was developed through systematic inductive coding, informed by political science theories of institutional drift and bureaucratic power. Data collection involved systematic sampling of 127 documents across the three jurisdictions, with thematic analysis conducted using established qualitative research protocols. NORM5 mechanisms subtly shift clinical norms through five distinct pathways, redistributing power toward vendors, payers, and healthcare organizations while systematically eroding clinician autonomy. The EU’s risk-based regulatory framework formalizes compliance burdens but enables systematic oversight; the US’s fragmented incentive structures promote defensive AI adoption patterns; and the UK’s polycentric governance approach supports coordinated policy responses. The analysis reveals significant societal risks, including algorithmic bias amplification, erosion of professional autonomy, and reduced public trust in healthcare systems. Algorithmic normativity governs healthcare by design, necessitating ethical governance to balance innovation with human-centered values. The NORM5 typology and policy toolkit offer actionable pathways for responsible AI governance.
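
Purely as an illustration of how the NORM5 typology could be operationalized in a document-analysis workflow, the toy sketch below tags passages with candidate NORM5 categories via seed keywords for later human review. The seed terms are invented placeholders, not the study's codebook, and keyword matching is no substitute for the inductive coding the authors describe.

```python
# Toy first-pass tagger for the NORM5 typology (human coders would verify all tags).
from enum import Enum

class Norm5(Enum):
    NUDGES = "nudges"
    OVERRIDE_FRICTION = "override friction"
    RESPONSIBILITY_CHOREOGRAPHY = "responsibility choreography"
    METRIC_COUPLING = "metric coupling"
    SCRIPTED_WORKFLOWS = "scripted workflows"

# Hypothetical seed terms per mechanism, used only to flag passages for review.
SEED_TERMS = {
    Norm5.NUDGES: ["default option", "pre-selected", "suggested order"],
    Norm5.OVERRIDE_FRICTION: ["override", "justification required", "sign-off"],
    Norm5.RESPONSIBILITY_CHOREOGRAPHY: ["final decision", "liability", "clinician responsible"],
    Norm5.METRIC_COUPLING: ["quality score", "reimbursement", "performance metric"],
    Norm5.SCRIPTED_WORKFLOWS: ["care pathway", "checklist", "mandatory step"],
}

def flag_mechanisms(passage: str) -> set[Norm5]:
    """Return the NORM5 categories whose seed terms occur in the passage."""
    text = passage.lower()
    return {cat for cat, terms in SEED_TERMS.items()
            if any(term in text for term in terms)}

print(flag_mechanisms(
    "Overriding the alert requires a written justification and senior sign-off."
))  # -> {Norm5.OVERRIDE_FRICTION}
```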

Citations: 0
An overview of AI ethics: moral concerns through the lens of principles, lived realities and power structures
Pub Date : 2026-01-28 DOI: 10.1007/s43681-025-00955-7
Elizabeth Liz M. Groen, Tamar Sharon, Marcel Becker

Along with the rapid development of AI systems, the literature addressing the moral concerns raised by AI, stemming from directions as different as computer science, medicine, and philosophy, has grown substantially. In focusing solely on AI ethics principles and guidelines, most overviews of the field adopt a principle-based understanding of these moral concerns. However, as our review illuminates, there is more richness and diversity in the current body of literature than this dominant principle-based approach suggests. Within this vast literature, we identify three approaches by which authors tend to formulate the moral concerns raised by AI: principles, lived realities, and power structures. These approaches can be viewed as lenses through which authors investigate the field, and each entails specific theoretical sensitivities, disciplinary traditions, and methodologies, and hence specific strengths and weaknesses. The first, “principle-based” approach takes moral concerns to be universal, stable, and fixed principles, which are globally shared, may travel between contexts, and are often predetermined. What we call the “lived realities” approach foregrounds the interaction between people and AI systems, focusing on local practices and everyday experiences, generally at a micro-level. Thirdly, what we call the “power structures” approach argues for the need to account for the cultural, social, political, and economic context of AI development, and hence for human-AI interactions at a macro-level. In bringing together different moral frameworks, traditions, and questions, our structure may serve as a bridge for comparing AI ethics with other areas of applied ethics, considering that AI systems are quickly being integrated into different spheres of society.

Citations: 0
Educational ideals affect AI acceptance in learning environments
Pub Date : 2026-01-23 DOI: 10.1007/s43681-025-00979-z
Florian Richter, Matthias Uhl

AI is increasingly used in learning environments to monitor, test, and educate students and to allow them to take more individualized learning paths. The success of AI in education will, however, require the acceptance of this technology by university management, faculty, and students. This acceptance will depend on the added value that stakeholders ascribe to the technology. In two empirical studies, we investigate the hitherto neglected question of what impact educational ideals have on the acceptance of AI in learning environments. We find clear evidence for our study participants’ conviction that humanistic educational ideals are considered less suitable for implementing AI in education than competence-based ideals. This implies that research on the influence of teaching and learning philosophies could be an enlightening component of a comprehensive research program on human-AI interaction in educational contexts.

Citations: 0
From functional possession to fictional access in the digital age: a starting point for an OnlyFans philosophy
Pub Date : 2026-01-23 DOI: 10.1007/s43681-025-00900-8
Cristiano Calì

The human propensity for possession and ownership has been a defining characteristic throughout history, influencing the structuring of societies. The advent of digital technology, however, challenges traditional notions of possession and necessitates a redefinition of the concept. This paper aims to explore the resignification of possession in the digital age through a philosophical reflection based on an anthropological analysis. It contrasts two paradigmatic forms of possession: the material collection of the ‘thing’ civilization and the data-based access of the digital civilization. Ultimately, analyzing the OnlyFans platform model, the research shows how the digital age has fundamentally changed the human mode of possession, demanding a new paradigm in both virtual and augmented reality, while emphasizing the irreplaceable material reference when it comes to the intimate aspects of Homo sapiens. The transition from material collection to digital access signifies a profound change in human self-perception and social structures and reflects the general cultural and existential changes triggered by technological progress.

Citations: 0