
AI & Society: Latest Publications

Trusting the (un)trustworthy? A new conceptual approach to the ethics of social care robots
IF 4.7 | Q2 Computer Science, Artificial Intelligence | Pub Date: 2025-05-17 | DOI: 10.1007/s00146-025-02274-1
Joan Llorca Albareda, Belén Liedo, María Victoria Martínez-López

Social care robots (SCR) have come to the forefront of the ethical debate. While the possibility of robots helping us tackle the global care crisis is promising for some, others have raised concerns about the adequacy of AI-driven technologies for the ethically complex world of care. Robots do not seem able to provide the comprehensive care many people demand and deserve; at the very least, they seem unable to engage in humane, emotion-laden and meaningful care relationships. In this article, we will propose to focus the debate on a particularly relevant aspect of care: trust. We will argue that, to answer the question of whether SCR are ethically acceptable, we must first address another question, namely, whether they are trustworthy. To this end, we propose a three-level model of trust analysis: rational, motivational, and personal or intimate. We will argue that some relevant forms of caregiving (especially care for highly dependent persons) require a very personal or intimate type of care that distinguishes it from other contexts. Nevertheless, this is not the only type of trust at work in care spaces. We will argue that, while we cannot have intimate or highly personal relationships with robots, they are trustworthy at the rational and thin motivational levels. The fact that robots cannot engage in some (personal) aspects of care does not mean that they cannot be useful in care contexts. We will contend that critical approaches to trusting SCR rest on two misconceptions, and we will propose a new model for analyzing their moral acceptability: sociotechnical trust in teams of humans and robots.

AI & Society, vol. 40, no. 8, pp. 5903–5918. Open-access PDF: https://link.springer.com/content/pdf/10.1007/s00146-025-02274-1.pdf
Citations: 0
Beyond symbol processing: the embodied limits of LLMs and the gap between AI and human cognition
IF 4.7 | Q2 Computer Science, Artificial Intelligence | Pub Date: 2025-05-17 | DOI: 10.1007/s00146-025-02382-y
Rasmus Gahrn-Andersen
AI & Society, vol. 40, no. 5, pp. 3105–3107.
Citations: 0
Communication experiment with moral-based chatbot: toward solution of divisions in democracy
IF 4.7 | Q2 Computer Science, Artificial Intelligence | Pub Date: 2025-05-09 | DOI: 10.1007/s00146-025-02377-9
Kazuhisa Miwa, Mayu Yamakawa, Akira Nakahara, Ayano Shimada

This manuscript presents the design and implementation of a chatbot integrated with moral foundations to explore its influence on addressing divisions in democratic societies. The study uses generative AI to simulate conversations that embody specific moral viewpoints, categorized into Individualizing and Binding foundations from Moral Foundations Theory. These chatbots engage participants in discussions on an ideologically sensitive topic, nuclear abolition, to observe the impact on communication dynamics and opinion formation. The experiment, conducted with participants in Japan, shows that the chatbots effectively conveyed their designed moral foundations and influenced how participants perceived and interacted with them. Only a limited number of participants were able to identify the partner as an AI. The Individualizing chatbot, which emphasizes individual rights and welfare, was associated with more positive interpersonal impressions than the Binding chatbot, which emphasizes group cohesion and social order. Furthermore, the study shows that discussions with these chatbots can significantly change participants' opinions on the issues under discussion, demonstrating both the potential of AI-driven tools in ideological intervention and moral education and the need for caution in their use. The findings argue for the careful development and use of AI in addressing societal divides, highlighting both the potential benefits and ethical challenges.

AI & Society, vol. 40, no. 8, pp. 5885–5902.
Citations: 0
Monetization could corrupt algorithmic explanations
IF 4.7 | Q2 Computer Science, Artificial Intelligence | Pub Date: 2025-05-09 | DOI: 10.1007/s00146-025-02352-4
Travis Greene, Sofie Goethals, David Martens, Galit Shmueli

Explainable artificial intelligence (XAI) aims to provide insights into the logic of automated decisions with the goal of promoting fairer, more transparent, and more trustworthy automated decision-making. Despite mounting regulatory pressure, changing consumer expectations, and a growing stream of XAI-related research, few consumer-facing applications of XAI exist. In anticipation of future XAI-enabled products and services, we use ethical foresight analysis to investigate the possible consequences of monetizing explanations. By developing a conceptual artifact we call an explanation platform, we analyze what could happen when digital advertising is fused with XAI. We explore the platform’s business and design logic, examine its potential social and ethical impact, and describe several plausible explanation manipulation scenarios and strategies. We find that while XAI monetization could incentivize industry adoption of XAI technology and expand algorithmic recourse across society, it could also lead to corrupted forms of explanations optimized for profit-driven objectives. Overall, our foresight analysis makes the case for the economic and technological feasibility of monetized XAI, but raises concerns about its desirability in liberal democratic societies.

AI & Society, vol. 40, no. 8, pp. 6291–6308. Open-access PDF: https://link.springer.com/content/pdf/10.1007/s00146-025-02352-4.pdf
Citations: 0
The Limits of Machine Learning Models of Misinformation
IF 4.7 | Q2 Computer Science, Artificial Intelligence | Pub Date: 2025-05-09 | DOI: 10.1007/s00146-025-02324-8
Adrian K. Yee

Judgments of misinformation are made relative to the informational preferences of the communities making them. However, informational standards change over time, inducing distribution shifts that threaten the adequacy of machine learning models of misinformation. After articulating five kinds of distribution shifts, three solutions for enhancing success are discussed: larger static training sets, social engineering, and dynamic sampling. I argue that given the idiosyncratic ontology of misinformation, the first option is inadequate, the second is unethical, and thus the third is superior. However, I conclude that the prospects for machine learning models of misinformation are far weaker than most have presupposed, given that both epistemic and non-epistemic values are difficult to operationalize dynamically in machine code, rendering them surprisingly at most a species of recommender systems rather than literal truth detectors.
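Of the three solutions, dynamic sampling is the most concrete algorithmically. As a rough illustration (not the author's proposal, and with entirely made-up data), a recency-weighted draw over labeled claims lets newer community judgments dominate the training set as informational standards drift:

```python
import random

def recency_weighted_sample(examples, k, decay=0.95):
    """Sample k training examples, weighting newer items more heavily.

    `examples` is ordered oldest -> newest; the i-th newest item gets
    weight decay**i, so recent labeling standards dominate the draw.
    """
    n = len(examples)
    weights = [decay ** (n - 1 - i) for i in range(n)]  # newest item gets weight 1
    return random.choices(examples, weights=weights, k=k)

# hypothetical labeled corpus: (claim id, community label at labeling time)
corpus = [(f"claim-{i}", "misinfo" if i % 3 == 0 else "ok") for i in range(100)]
batch = recency_weighted_sample(corpus, k=10)
```

One design question this sketch surfaces immediately is how fast `decay` should track shifting standards, which is exactly the operationalization problem the article presses on.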

AI & Society, vol. 40, no. 8, pp. 5871–5884. Open-access PDF: https://link.springer.com/content/pdf/10.1007/s00146-025-02324-8.pdf
Citations: 0
A polycrisis threat model for AI
IF 4.7 | Q2 Computer Science, Artificial Intelligence | Pub Date: 2025-05-07 | DOI: 10.1007/s00146-025-02371-1
Adam Bales

A catastrophic AI threat model is a rigorous exploration of some particular mechanisms by which AI could potentially lead to catastrophic outcomes. In this article, I explore a polycrisis threat model. According to this model, AI will lead to a series of harms like disinformation and increased concentration of wealth and power. Interactions between these different harms will make things worse than they would have been had each harm operated in isolation. And the interacting harms will ultimately cause or constitute a catastrophe. My aim in this paper is not to defend the inevitability of such a polycrisis occurring. Instead, I aspire merely to establish that polycrisis-driven catastrophe is sufficiently plausible that it calls for further exploration. In doing so, I hope to emphasise that alongside worries about AI takeover, those concerned about catastrophic risk from AI should also take seriously worries about extreme power concentration and systemic disempowerment of humanity.

AI & Society, vol. 40, no. 8, pp. 6277–6289. Open-access PDF: https://link.springer.com/content/pdf/10.1007/s00146-025-02371-1.pdf
Citations: 0
In generative artificial intelligence we trust: unpacking determinants and outcomes for cognitive trust
IF 4.7 | Q2 Computer Science, Artificial Intelligence | Pub Date: 2025-05-06 | DOI: 10.1007/s00146-025-02378-8
Minh-Tay Huynh, Thomas Aichner

Amid the pervasive integration of AI technologies across societal and industrial domains, understanding users' trust in these systems becomes increasingly crucial. This study addresses the growing need to understand users' trust in Generative Artificial Intelligence (GenAI) and explores the societal implications of this type of trust. Drawing on socio-technical systems theory, this work employs the FAT (Fairness, Accountability, Transparency) framework and the humanness factors of AI (anthropomorphism, social presence, and emotions) as antecedents of users' human-like trust, which is proposed to influence users' attitudes, perceived performance, and behavioral intentions. Structural equation modeling analysis (N = 244) reveals that fairness significantly enhances trust, while accountability and transparency do not. Social presence and emotions positively impact trust, whereas anthropomorphism shows no significant effect. Furthermore, trust shapes users' attitudes, perceived performance, and behavioral intentions toward GenAI systems. This study contributes to the AI adoption and user trust literature by illuminating the main antecedents of human-like trust and showing its impact on user acceptance from a socio-technical perspective. Beyond the academic contribution, this research highlights the broader societal relevance of user trust in GenAI, particularly regarding public concerns over black-box issues and the humanness features of GenAI systems.
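The path-analytic core of such an SEM analysis can be sketched with simulated data. The snippet below is a stdlib-only toy, not the study's model: the coefficients are invented, predictors are generated independently (so bivariate slopes approximate the multivariate path estimates), and it only illustrates how path estimates separate significant antecedents (fairness) from non-significant ones (anthropomorphism):

```python
import random

def slope(x, y):
    """Bivariate OLS path estimate: cov(x, y) / var(x)."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var = sum((a - mx) ** 2 for a in x)
    return cov / var

random.seed(7)
n = 244  # sample size reported in the study
fairness = [random.gauss(0, 1) for _ in range(n)]
anthro   = [random.gauss(0, 1) for _ in range(n)]
# illustrative structural equations: fairness feeds trust, anthropomorphism does not
trust    = [0.5 * f + random.gauss(0, 0.3) for f in fairness]
attitude = [0.6 * t + random.gauss(0, 0.3) for t in trust]

path_fair_trust   = slope(fairness, trust)   # near 0.5, up to sampling noise
path_anthro_trust = slope(anthro, trust)     # near 0: no effect was simulated
path_trust_att    = slope(trust, attitude)   # near 0.6, up to sampling noise
```

A full SEM additionally estimates latent constructs from survey items and fits all paths jointly; the point here is only the structure of antecedent-to-trust-to-outcome paths.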

AI & Society, vol. 40, no. 8, pp. 5849–5869. Open-access PDF: https://link.springer.com/content/pdf/10.1007/s00146-025-02378-8.pdf
Citations: 0
What factors predict user acceptance of ChatGPT for mental and physical healthcare: an extended technology acceptance model framework
IF 4.7 | Q2 Computer Science, Artificial Intelligence | Pub Date: 2025-05-03 | DOI: 10.1007/s00146-025-02334-6
Sage Kelly, Sherrie-Anne Kaye, Katherine M. White, Oscar Oviedo-Trespalacios

The rise of ChatGPT has emphasized the need for an improved conceptual understanding of users' agency when interacting with artificial intelligence (AI) systems for healthcare. Australian ChatGPT users (N = 216) completed a repeated-measures online survey. Hierarchical regression analyses assessed the influence of demographic factors (age and gender), Technology Acceptance Model constructs (perceived usefulness and perceived ease of use), and extended variables (trust and privacy concerns) on users' behavioral intentions to use ChatGPT for physical and mental healthcare. The proposed model was partially supported: the findings emphasized the need to establish user trust in ChatGPT and its perceived usefulness in both areas of healthcare. Privacy concerns were a significant predictor of intentions to use ChatGPT for mental healthcare, while perceived ease of use predicted intentions to use it for physical healthcare. The findings indicate that predictors of AI use cannot be generalized across healthcare types and that unique drivers should be considered.
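Hierarchical regression of this kind enters predictor blocks in steps and asks how much variance each block adds. A minimal stdlib sketch with simulated data (variable names mirror the study, but every number is invented) shows the ΔR² logic of adding a TAM construct after demographics:

```python
import random

def ols(X, y):
    """Ordinary least squares via normal equations, Gauss-Jordan solve."""
    k = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)]
         + [sum(r[i] * yi for r, yi in zip(X, y))] for i in range(k)]
    for i in range(k):
        p = A[i][i]
        A[i] = [v / p for v in A[i]]
        for r in range(k):
            if r != i:
                f = A[r][i]
                A[r] = [v - f * w for v, w in zip(A[r], A[i])]
    return [A[i][k] for i in range(k)]

def r2(X, y, b):
    """Proportion of variance in y explained by the fitted model."""
    yhat = [sum(bi * xi for bi, xi in zip(b, row)) for row in X]
    ybar = sum(y) / len(y)
    ss_res = sum((yi - yh) ** 2 for yi, yh in zip(y, yhat))
    ss_tot = sum((yi - ybar) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

random.seed(1)
n = 216  # sample size reported in the study
age = [random.uniform(18, 70) for _ in range(n)]
usefulness = [random.gauss(0, 1) for _ in range(n)]
# simulated intentions driven by perceived usefulness, not demographics
intention = [0.7 * u + random.gauss(0, 0.5) for u in usefulness]

step1 = [[1.0, a] for a in age]                          # block 1: demographics
step2 = [[1.0, a, u] for a, u in zip(age, usefulness)]   # block 2: + TAM construct
delta_r2 = (r2(step2, intention, ols(step2, intention))
            - r2(step1, intention, ols(step1, intention)))  # variance added by TAM block
```

In the real analysis each step's ΔR² is tested for significance; here the simulated TAM block adds most of the explained variance, mirroring the reported importance of perceived usefulness.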

AI & Society, vol. 40, no. 8, pp. 6257–6275. Open-access PDF: https://link.springer.com/content/pdf/10.1007/s00146-025-02334-6.pdf
Citations: 0
Introduction: When data turns into archives: making digital records more accessible with AI
IF 4.7 | Q2 Computer Science, Artificial Intelligence | Pub Date: 2025-04-30 | DOI: 10.1007/s00146-025-02374-y
Lise Jaillant, Lingjia Zhao
AI & Society, vol. 40, no. 8, pp. 5787–5791.
Citations: 0
A methodology for ethical decision-making in automated vehicles
IF 4.7 | Q2 Computer Science, Artificial Intelligence | Pub Date: 2025-04-30 | DOI: 10.1007/s00146-025-02370-2
Chloe Gros, Peter Werkhoven, Leon Kester, Marieke Martens

Despite significant advancements in AI and automated driving, a robust ethical framework for automated vehicle (AV) decision-making remains undeveloped. Such a framework requires clearly defined moral attributes to guide AVs in evaluating complex and ethically sensitive scenarios. Existing frameworks often rely on a single normative ethical theory, limiting their ability to address the nuanced nature of human decision-making and leading to conflicting outcomes. Augmented Utilitarianism (AU) offers a promising alternative by integrating elements of virtue ethics, deontology, and consequentialism into a non-normative framework. Grounded in moral psychology and neuroscience, AU employs mathematical ethical goal functions to capture societally aligned attributes. In this study, we propose and evaluate a method to elicit these attributes for AV decision-making. One hundred participants were presented with traffic scenarios, including critical and non-critical situations, and tasked with evaluating the relevance of an initial set of 11 attributes (e.g., physical harm, psychological harm, and moral responsibility) while suggesting additional relevant attributes. Results identified two new attributes (environmental harm and energy efficiency) and revealed that four attributes (physical harm, psychological harm, legality of the AV, and self-preservation) varied significantly between critical and non-critical scenarios. These findings suggest that the weight of attributes in ethical goal functions may need to adapt to situational criticality. The method was validated based on key evaluation criteria: it demonstrated sensitivity by producing varying relevance scores for attributes, was deemed relevant by participants for eliciting AV decision-making attributes, and allowed for the identification of additional attributes, enhancing the robustness of the framework. This work contributes to the development of a dynamic and context-sensitive ethical framework for AV decision-making.
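The mathematical ethical goal functions of AU, with weights that adapt to situational criticality, can be pictured as a context-dependent weighted sum over elicited attributes. The snippet below is a hypothetical illustration only: the attribute names come from the study, but the weights, harm scores, and candidate actions are invented:

```python
# Hypothetical context-sensitive ethical goal function: attribute weights
# shift with situational criticality, as the elicitation results suggest.
BASE_WEIGHTS = {"physical_harm": 0.4, "psychological_harm": 0.2,
                "legality": 0.2, "self_preservation": 0.2}
CRITICAL_WEIGHTS = {"physical_harm": 0.6, "psychological_harm": 0.2,
                    "legality": 0.1, "self_preservation": 0.1}

def goal_score(attribute_scores, critical):
    """Score a candidate action: higher is ethically preferable.

    `attribute_scores` maps each attribute to a harm level in [0, 1];
    the goal function is the negated weighted harm, so low-harm actions win.
    """
    weights = CRITICAL_WEIGHTS if critical else BASE_WEIGHTS
    return -sum(w * attribute_scores.get(a, 0.0) for a, w in weights.items())

# invented harm profiles for two candidate maneuvers
swerve = {"physical_harm": 0.2, "legality": 0.8, "self_preservation": 0.1}
brake  = {"physical_harm": 0.6, "legality": 0.0, "self_preservation": 0.4}
options = {"swerve": swerve, "brake": brake}
best = max(options, key=lambda o: goal_score(options[o], critical=True))
```

In a critical scenario the heavier physical-harm weight favors the illegal-but-safer swerve; the framework's point is that such weightings should be elicited from society rather than hard-coded by a single normative theory.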

A methodology for ethical decision-making in automated vehicles
Chloe Gros, Peter Werkhoven, Leon Kester, Marieke Martens
AI & Society, Vol. 40 (8), pp. 6245–6256. Published 2025-04-30. Open Access.
DOI: 10.1007/s00146-025-02370-2
PDF: https://link.springer.com/content/pdf/10.1007/s00146-025-02370-2.pdf
Citations: 0