
AI & Society: Latest Publications

A methodology for ethical decision-making in automated vehicles
IF 4.7 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2025-04-30 DOI: 10.1007/s00146-025-02370-2
Chloe Gros, Peter Werkhoven, Leon Kester, Marieke Martens

Despite significant advancements in AI and automated driving, a robust ethical framework for AV decision-making remains undeveloped. Such a framework requires clearly defined moral attributes to guide AVs in evaluating complex and ethically sensitive scenarios. Existing frameworks often rely on a single normative ethical theory, limiting their ability to address the nuanced nature of human decision-making and leading to conflicting outcomes. Augmented Utilitarianism (AU) offers a promising alternative by integrating elements of virtue ethics, deontology, and consequentialism into a non-normative framework. Grounded in moral psychology and neuroscience, AU employs mathematical ethical goal functions to capture societally aligned attributes. In this study, we propose and evaluate a method to elicit these attributes for AV decision-making. One hundred participants were presented with traffic scenarios, including critical and non-critical situations, and tasked with evaluating the relevance of an initial set of 11 attributes (e.g., physical harm, psychological harm, and moral responsibility) while suggesting additional relevant attributes. Results identified two new attributes—environmental harm and energy efficiency—and revealed that four attributes (physical harm, psychological harm, legality of the AV, and self-preservation) varied significantly between critical and non-critical scenarios. These findings suggest that the weight of attributes in ethical goal functions may need to adapt to situational criticality. The method was validated based on key evaluation criteria: it demonstrated sensitivity by producing varying relevance scores for attributes, was deemed relevant by participants for eliciting AV decision-making attributes, and allowed for the identification of additional attributes, enhancing the robustness of the framework. This work contributes to the development of a dynamic and context-sensitive ethical framework for AV decision-making.
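The criticality-dependent weighting the abstract describes can be sketched as a simple goal function. This is a minimal illustration, not the authors' formulation: the attribute names follow the abstract, but the weight values and the normalization are invented for the sketch.

```python
# Hypothetical sketch of an ethical goal function whose attribute weights
# adapt to situational criticality. Weights are illustrative assumptions,
# not values from the paper.

def ethical_goal_score(attributes, critical):
    """Score an action candidate as a weighted sum of moral attributes.

    attributes: dict mapping attribute name -> normalized score in [0, 1],
    where higher means a better outcome (e.g. less physical harm).
    critical: whether the traffic situation is safety-critical.
    """
    # The four attributes the study found to vary with criticality
    # (physical harm, psychological harm, legality, self-preservation)
    # are re-weighted between the two conditions.
    weights = {
        "physical_harm": 0.40 if critical else 0.20,
        "psychological_harm": 0.15 if critical else 0.10,
        "legality": 0.15 if critical else 0.25,
        "self_preservation": 0.15 if critical else 0.10,
        "energy_efficiency": 0.05 if critical else 0.20,
        "environmental_harm": 0.10 if critical else 0.15,
    }
    return sum(weights[k] * attributes.get(k, 0.0) for k in weights)
```

Because the weights in each condition sum to 1.0, scores stay comparable across critical and non-critical scenarios while the relative importance of harm-related attributes shifts.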

AI & Society, vol. 40, no. 8, pp. 6245–6256. Open-access PDF: https://link.springer.com/content/pdf/10.1007/s00146-025-02370-2.pdf
Citations: 0
Tracing the bias loop: AI, cultural heritage and bias-mitigating in practice
IF 4.7 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2025-04-30 DOI: 10.1007/s00146-025-02349-z
Anna Foka, Gabriele Griffin, Dalia Ortiz Pablo, Paulina Rajkowska, Sushruth Badri

This article investigates the pervasive issue of bias within AI-driven cultural heritage collections, emphasizing how digital technologies both inherit and amplify existing societal and historical prejudices embedded in analogue records. It outlines the multifaceted nature of bias—ranging from data selection and annotation to algorithmic design and user interaction—demonstrating how each stage of the AI pipeline can introduce or perpetuate distortions in representation. Through a critical review of current scholarship and practical case studies, particularly in image classification, the article evaluates technical strategies such as data augmentation, adversarial debiasing, and monitoring plans for bias mitigation. The findings reveal that while methods like noise injection and colour jittering can balance datasets and improve model fairness, effective bias mitigation ultimately depends on interdisciplinary collaboration between heritage professionals, subject experts, and data scientists. The article concludes that addressing bias requires an ongoing, holistic approach, integrating both technical and humanistic perspectives from data collection to model deployment. This ensures more inclusive, accurate, and ethically sound representations of cultural heritage, supporting the sector’s goals of diversity and accessibility for future audiences.
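The augmentation techniques the article evaluates for balancing image datasets can be sketched with plain numpy. The parameter values (noise sigma, jitter ranges) are illustrative assumptions, not settings from the case studies.

```python
import numpy as np

# Minimal sketch of two augmentation techniques named in the article:
# additive noise injection and colour jittering, used to oversample an
# under-represented class. Parameters are illustrative assumptions.

rng = np.random.default_rng(0)

def inject_noise(img, sigma=0.05):
    """Add zero-mean Gaussian noise to a float image in [0, 1]."""
    noisy = img + rng.normal(0.0, sigma, size=img.shape)
    return np.clip(noisy, 0.0, 1.0)

def colour_jitter(img, brightness=0.2, contrast=0.2):
    """Randomly shift brightness and rescale contrast per image."""
    b = rng.uniform(-brightness, brightness)
    c = rng.uniform(1.0 - contrast, 1.0 + contrast)
    jittered = (img - img.mean()) * c + img.mean() + b
    return np.clip(jittered, 0.0, 1.0)

def augment_minority(images, target_n):
    """Pad a minority class up to target_n with perturbed copies."""
    out = list(images)
    while len(out) < target_n:
        src = images[rng.integers(len(images))]
        out.append(colour_jitter(inject_noise(src)))
    return out
```

As the article stresses, such technical balancing only addresses the data stage of the pipeline; annotation and deployment biases still require the interdisciplinary review it calls for.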

AI & Society, vol. 40, no. 8, pp. 5835–5847. Open-access PDF: https://link.springer.com/content/pdf/10.1007/s00146-025-02349-z.pdf
Citations: 0
A sociotechnological-system approach to AI ethics
IF 4.7 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2025-04-28 DOI: 10.1007/s00146-025-02335-5
Erich Riesen

AI algorithms require human input to achieve technological aims. This fact is often overlooked in discussions of autonomous systems and AI safety, to the detriment of both philosophical discourse and practical progress. One potential remedy is to ground our theorizing more fundamentally in the idea that AI technologies are sociotechnological systems with human and artifactual components. In this article, I pursue this strategy, aiming to shift the focus in AI ethics from artifacts and their intrinsic properties—what I refer to as the robotic conception of AI—to the relationships among elements embedded in AI-involving sociotechnological systems. First, I defend the claim that the sociotechnological-system perspective provides an accurate description of some of our most advanced AI. Second, I argue that the dominance of the robotic conception has steered AI safety research down unproductive paths, while the sociotechnological perspective has the capacity to set us right. Specifically, the robotic conception encourages the development of artificial moral agents—whose creation we should avoid if possible—and distracts researchers with hypothetical trolley cases. In contrast, the sociotechnological approach coheres with actual progress being made on AI safety (e.g., networking, shared user-artifact control, and value alignment) and makes vivid solutions to the safety problem that do not require the creation of humanlike moral decision-makers.

AI & Society, vol. 40, no. 8, pp. 6231–6243. Open-access PDF: https://link.springer.com/content/pdf/10.1007/s00146-025-02335-5.pdf
Citations: 0
Technologies as “AI Companions”: a call for more inclusive emotional affordance for people with disabilities
IF 4.7 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2025-04-27 DOI: 10.1007/s00146-025-02358-y
Liu Yang
AI & Society, vol. 40, no. 8, pp. 6481–6483.
Citations: 0
The tyranny of algorithmic personification and why we must resist it
IF 4.7 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2025-04-26 DOI: 10.1007/s00146-025-02359-x
Palanichamy Naveen
AI & Society, vol. 40, no. 8, pp. 6479–6480.
Citations: 0
Are Turkish pre-service teachers worried about AI? A study on AI anxiety and digital literacy
IF 4.7 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2025-04-25 DOI: 10.1007/s00146-025-02348-0
Damla Ayduğ, Hakan Altınpulluk

The primary objective of this study is to determine whether the level of digital literacy among pre-service teachers reliably correlates with their anxiety levels concerning artificial intelligence. The study was conducted as a correlational study, with a sample size of 221 pre-service teachers. The study’s population comprised 3922 pre-service teachers enrolled at Turkish state and private universities. To collect study data, the researchers used the “Personal Information Form,” “Digital Literacy Scale,” and “Artificial Intelligence Anxiety Scale.” The data were analyzed using stepwise regression analysis and descriptive statistics. The study’s results indicated that pre-service teachers exhibited high levels of digital literacy and moderate degrees of anxiety regarding artificial intelligence. Regression analysis revealed that 10.3% of pre-service teachers’ anxiety concerning artificial intelligence could be predicted by the technical sub-dimension of digital literacy. Consequently, it was demonstrated that pre-service teachers’ apprehensions regarding artificial intelligence decreased as their perception of technical digital fluency increased. Other sub-dimensions of digital literacy were deemed insignificant in predicting the anxiety levels of pre-service teachers regarding artificial intelligence. Based on these findings, suggestions for future study directions were proposed.
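The study's stepwise regression can be illustrated with a small forward-selection sketch on synthetic data. Everything here is invented for illustration: the sub-dimension names, the effect size, and the data are assumptions, with only the sample size (n = 221) taken from the abstract.

```python
import numpy as np

# Forward-stepwise regression sketch: predict AI anxiety from digital-
# literacy sub-dimensions, keeping predictors only while they raise R^2.
# Data and effect sizes are synthetic assumptions for illustration.

def r_squared(X, y):
    """R^2 of an OLS fit with intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid.var() / y.var()

def forward_stepwise(X, y, names, min_gain=0.01):
    """Greedily add the predictor with the largest R^2 gain."""
    chosen, remaining, best = [], list(range(X.shape[1])), 0.0
    while remaining:
        r2, j = max((r_squared(X[:, chosen + [j]], y), j) for j in remaining)
        if r2 - best < min_gain:
            break
        chosen.append(j)
        remaining.remove(j)
        best = r2
    return [names[j] for j in chosen], best

rng = np.random.default_rng(42)
n = 221  # sample size from the study
technical = rng.normal(size=n)
cognitive = rng.normal(size=n)
social = rng.normal(size=n)
# Anxiety driven mainly by the technical sub-dimension, plus noise:
anxiety = -0.35 * technical + rng.normal(size=n)
X = np.column_stack([technical, cognitive, social])
selected, r2 = forward_stepwise(X, anxiety, ["technical", "cognitive", "social"])
```

With a genuine effect on only one predictor, the procedure selects the technical sub-dimension first and the remaining sub-dimensions contribute little, mirroring the study's finding that only the technical dimension was a significant predictor.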

AI & Society, vol. 40, no. 8, pp. 5823–5834. Open-access PDF: https://link.springer.com/content/pdf/10.1007/s00146-025-02348-0.pdf
Citations: 0
The digital erosion of intellectual integrity: why misuse of generative AI is worse than plagiarism
IF 4.7 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2025-04-24 DOI: 10.1007/s00146-025-02362-2
David Shaw
AI & Society, vol. 40, no. 8, pp. 5819–5821. Open-access PDF: https://link.springer.com/content/pdf/10.1007/s00146-025-02362-2.pdf
Citations: 0
Testimony by LLMs
IF 4.7 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2025-04-24 DOI: 10.1007/s00146-025-02366-y
Jinhua He, Chen Yang

Artificial testimony generated by large language models (LLMs) can be a source of knowledge. However, the requirement that artificial testifiers must satisfy for successful knowledge acquisition is different from the requirement that human testifiers must satisfy. Correspondingly, the epistemic ground of artificial testimonial knowledge is not the well-known and accepted ones suggested by renowned epistemological theories of (human) testimony. Based on Thomas Reid’s old teaching, we suggest a novel epistemological theory of artificial testimony that for receivers to justifiably believe artificially generated statements, testifiers of the statement should robustly perform the propensities of veracity and cautiousness. The theory transforms the weakness of Reid’s view to an advantage of its own. It sets an achievable standard for LLMs and clarifies the improvement that current LLMs should make for meeting the standard. Moreover, it indicates a pluralistic nature of testimonial justification pertaining to the pluralistic nature of possible testifiers for knowledge transmission.

AI & Society, vol. 40, no. 8, pp. 6201–6213. Open-access PDF: https://link.springer.com/content/pdf/10.1007/s00146-025-02366-y.pdf
Citations: 0
Deep learning as machine metis
IF 4.7 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2025-04-24 DOI: 10.1007/s00146-025-02360-4
Primož Krašovec

This article situates current deep learning (DL) artificial intelligence (AI) within Leroi-Gourhan’s deep history of the human species’ relation to technology. According to Leroi-Gourhan, technology is both a key element of anthropogenesis and a source of later tensions (or disentanglement) between the human species and its external and increasingly autonomous technics. Human organic (life-oriented) intelligence at first extends itself through technical (machine-oriented) intelligence, only to be later left behind by it. We propose a concept of machine intelligence that goes beyond technical intelligence, the latter a (still) hybrid human–machine intelligence. This new, emerging machine intelligence is DL AI. DL AI developed out of the failure of symbolic AI to instantiate a key generic component of intelligence: creativity. While symbolic AI was rigid and pre-programmed, DL is flexible and unpredictable, presenting an embryonic form of actual machine intelligence. Its creativity can be likened to the ancient Greek concept of metis, a cunning and polymorphous form of intelligence. Although often biased and problematic, DL exhibits a machine creativity that goes beyond the anthropocentric imaginings of AI as a (mechanistic) imitation of the human norm.

AI & Society, vol. 40, no. 8, pp. 5809–5818. Open-access PDF: https://link.springer.com/content/pdf/10.1007/s00146-025-02360-4.pdf
Citations: 0
Navigating fairness: introducing the multidimensional AIM-FAIR scale for evaluating AI decision-making
IF 4.7 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2025-04-24 DOI: 10.1007/s00146-025-02354-2
Nico Ehrhardt, Manuela Renn, Sonja Utz

People’s concerns regarding the fairness of algorithmic decision-making, coupled with its expanding utilization across various spheres of our lives underscores the need for robust measures to assess perceived fairness in standardized survey research. Existing fairness scales often suffer from inadequate content coverage, particularly in terms of Perceived Group Discrimination, and frequently employ suboptimal measurement methods, such as single-item assessments. This paper introduces the AIM-FAIR scale, a multidimensional tool grounded in classical test theory, employing Likert-scaled answering options and a reflective measurement model. Developed through four studies (n = 1777) and validated in both English and German, the scale includes 17 items across five subscales: Perceived Consistency, Perceived Equity, Perceived Group Bias, Perceived Manipulability, and Perceived (Explanatory) Transparency. Both language versions demonstrate excellent fit indices and consistent measurement invariance across diverse backgrounds, languages, and conditions. The AIM-FAIR scale offers higher ecological validity and a more comprehensive framework for evaluating fairness in ADM, enhancing cross-cultural and cross-linguistic research on AI fairness.
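Validating a multi-item scale like AIM-FAIR involves internal-consistency checks on each subscale. As a hedged illustration, here is Cronbach's alpha, one common reliability statistic for Likert subscales; note this is an assumption on my part, since the abstract reports fit indices and measurement invariance rather than alpha specifically.

```python
import numpy as np

# Cronbach's alpha: an illustrative internal-consistency check of the
# kind used when validating multi-item Likert subscales. This statistic
# is an assumption here, not one the abstract reports.

def cronbach_alpha(items):
    """Cronbach's alpha for a (n_respondents, k_items) response matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of summed scores
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Perfectly redundant items yield alpha = 1.0:
rng = np.random.default_rng(1)
col = rng.normal(size=(50, 1))
alpha = cronbach_alpha(np.repeat(col, 3, axis=1))
```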

AI & Society, vol. 40, no. 8, pp. 6181–6199. Open-access PDF: https://link.springer.com/content/pdf/10.1007/s00146-025-02354-2.pdf
Citations: 0