
AI & Society: Latest Publications

Reflexive ecologies of knowledge in the future of AI & Society
IF 4.7 | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-13 | DOI: 10.1007/s00146-026-02859-4
Steven Watson, Satinder P. Gill, Donghee Shin, Manh-Tung Ho
AI & Society 41(1): 1–3.
Citations: 0
The machine in the manuscript: editorial dilemmas
IF 4.7 | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-10-25 | DOI: 10.1007/s00146-025-02665-4
Donghee Shin, Angelika Suchanová, Jeffrey White, Liam Magee, Manh-Tung Ho, Houda Chakiri
AI & Society 40(8): 5781–5786.
Citations: 0
AI, society, and the shadows of our desires
IF 4.7 | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-10-22 | DOI: 10.1007/s00146-025-02484-7
Larry Stapleton
AI & Society 40(7): 5109–5113.
Citations: 0
Is Consent-GPT valid? Public attitudes to generative AI use in surgical consent.
IF 4.7 | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-10-09 | DOI: 10.1007/s00146-025-02644-9
Jemima Winifred Allen, Ivar Rodríguez Hannikainen, Julian Savulescu, Dominic Wilkinson, Brian David Earp

Healthcare systems often delegate surgical consent-seeking to members of the treating team other than the surgeon (e.g., junior doctors in the UK and Australia). Yet, little is known about public attitudes toward this practice compared to emerging AI-supported options. This first large-scale empirical study examines how laypeople evaluate the validity and liability risks of using an AI-supported surgical consent system (Consent-GPT). We randomly assigned 376 UK participants (demographically representative for age, ethnicity, and gender) to evaluate identical transcripts of surgical consent interviews framed as being conducted by either Consent-GPT, a junior doctor, or the treating surgeon. Participants broadly agreed that AI-supported consent was valid (87.6% agreement), but rated it significantly lower than consent sought solely by human clinicians (treating surgeon: 97.6% agreement; junior doctor: 96.2%). Participants expressed substantially lower satisfaction with AI-supported consent compared to human-only processes (Consent-GPT: 59.5% satisfied; treating surgeon: 96.8%; junior doctor: 93.1%), despite identical consent interactions (i.e., the same informational content and display format). Regarding justification to sue the hospital following a complication, participants were slightly more inclined to support legal action in response to AI-supported consent than human-only consent. However, the strongest predictor was proper risk disclosure, not the consent-seeking agent. As AI integration in healthcare accelerates, these results highlight critical considerations for implementation strategies, suggesting that a hybrid approach to consent delegation that leverages AI's information sharing capabilities while preserving meaningful human engagement may be more acceptable to patients than an otherwise identical process with relatively less human-to-human interaction.
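The gap between the reported agreement rates can be sanity-checked with a standard two-proportion z-test. The sketch below is illustrative only: the abstract reports 376 participants split across three conditions but not exact per-group counts, so the per-group size of roughly 125 is an assumption made here.

```python
import math

def two_prop_ztest(x1: int, n1: int, x2: int, n2: int) -> float:
    """z-statistic for H0: the two underlying proportions are equal."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical per-group counts (~125 each) matching the reported rates:
# Consent-GPT 87.6% agreement vs. treating surgeon 97.6%.
n = 125
z = two_prop_ztest(round(0.876 * n), n, round(0.976 * n), n)
print(f"z = {z:.2f}")  # |z| > 1.96, i.e. significant at alpha = 0.05
```

Under these assumed group sizes the difference clears the conventional 5% significance threshold, consistent with the abstract's claim of a significant gap.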

AI & Society. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7618318/pdf/
Citations: 0
Body metaphors in science fiction narratives: a proposal for challenging stereotypes of robots in narrative
IF 4.7 | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-08-25 | DOI: 10.1007/s00146-025-02431-6
Xiaoyang Guo, Yi Zeng

Since the latter half of the twentieth century, science fiction narratives centered on humanoid robots have continuously explored the future of human–machine symbiosis through embodied character design, providing a conceptual testing ground for real-world robotic development. This study focuses on the metaphorical mechanisms underlying the “quasi-body” of robots in such narratives, revealing how they challenge the stability of the human concept. By analyzing how various humanoid robotic figures in science fiction narratives are modeled upon the human “body” and how the robots’ “quasi-body” reciprocally reshapes the concept of the “human”, this article employs a dual perspective integrating phenomenological embodiment theory and conceptual metaphor theory. The argument unfolds in three progressive stages: deconstructing the metaphorical imitation of the human in robotic embodiment within science fiction narrative, critiquing the simplified functional body that overlooks the fundamental role of the body in cognition, and tracing the reverse influence of humanoid metaphor modeling on the conceptualization of the human. This study seeks to expose the intrinsic tensions embedded in bodily metaphorization within human–robot modeling. As human–robot/machine symbiosis becomes an increasingly normalized condition of existence, only by disrupting entrenched cognitive frameworks of body stereotype can we cultivate novel relational paradigms imbued with greater ethical imagination in the technological reality.

AI & Society 41(1): 279–288.
Citations: 0
What it takes to control AI by design: human learning
IF 4.7 | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-08-24 | DOI: 10.1007/s00146-025-02401-y
Dov Te’eni, Inbal Yahav, David Schwartz

Experts in government, academia, and practice are increasingly concerned about the need for human oversight in critical human–AI systems. At the same time, traditional control designs are proving inadequate to handle the complexities of new AI technologies. Incorporating insights from systems theory, we propose a robust framework that elucidates control at multiple levels and in multiple modes of operation, ensuring meaningful human control over the human–AI system. Our framework is built on continual human learning to match advances in machine learning. The human–AI system operates in two modes: stable and adaptive, which, in combination, enable the effective use of big data and the learning necessary for effective control and adaptation. Each system level and mode of operation requires a specific control-feedback loop, and all controls must be aligned for performance and values with the higher system level to provide human control over AI. Applying these ideas to a human–AI decision system for text classification in critical applications, we demonstrate how a method we call reciprocal human–machine learning can be designed to facilitate an adaptive mode and how oversight can be implemented in a stable mode. These designs yield high and consistent classification performance that is unbiased and closely aligned with human values. The framework ensures effective human learning, enabling humans to stay in the loop and stay in control. Our framework provides spadework for a model of control in critical AI decision systems operating in volatile environments, where humans continue to learn alongside the machine.
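The stable/adaptive dual mode of operation can be pictured as a minimal control-feedback loop. The sketch below is a toy illustration of that idea only; the class, threshold, and mode names are invented here and are not the authors' implementation.

```python
class HumanAIController:
    """Toy control-feedback loop: a performance signal below a
    human-set threshold switches the system from the stable mode
    (routine oversight) to the adaptive mode (human-guided relearning)."""

    def __init__(self, threshold: float = 0.9):
        self.threshold = threshold
        self.mode = "stable"

    def feedback(self, performance: float) -> str:
        # Each observed performance value closes the feedback loop
        # and updates the operating mode for the next cycle.
        self.mode = "adaptive" if performance < self.threshold else "stable"
        return self.mode

ctrl = HumanAIController(threshold=0.9)
modes = [ctrl.feedback(p) for p in (0.95, 0.85, 0.93)]
print(modes)  # ['stable', 'adaptive', 'stable']
```

The point of the sketch is the structure, not the rule: a degraded performance signal triggers the adaptive (learning) mode, and recovery returns the system to stable oversight.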

AI & Society 41(1): 237–250. Open access PDF: https://link.springer.com/content/pdf/10.1007/s00146-025-02401-y.pdf
Citations: 0
The paradox of artificial intelligence (AI) and narrative-based medicine: challenges and potential for enhanced patient care
IF 4.7 | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-08-09 | DOI: 10.1007/s00146-025-02418-3
Nadirah Ghenimi, Romona Govender, Keymanthri Moodley

The integration of artificial intelligence (AI) into healthcare has transformed patient care through advanced diagnostics, personalized treatment plans, and predictive analytics. However, this technological evolution presents a paradox when juxtaposed with narrative-based medicine (NBM), which emphasizes the patient’s story and human experience in healthcare delivery. The integration of AI into NBM raises questions regarding its clinical applicability, resistance from patients and physicians, emotional considerations, time constraints, and ability to balance psychosocial and biomedical care. This critical review explores the challenges and potential of combining AI with NBM, aiming to enhance patient care by leveraging the strengths of both approaches.

AI & Society 41(1): 251–257. Open access PDF: https://link.springer.com/content/pdf/10.1007/s00146-025-02418-3.pdf
Citations: 0
Technosolutionism and the empathetic medical chatbot
IF 4.7 | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-08-08 | DOI: 10.1007/s00146-025-02441-4
Tamar Sharon

This article argues for the value of applying the concept of technosolutionism to empathetic medical chatbots. By directing one’s attention to the relationship between (techno)solutions and the problems they are supposed to solve, technosolutionism helps identify two important risks in this context that tend to get overlooked in the discussion on privacy, bias, and hallucination risks of (generative) AI. First, empathetic chatbots may lead to a redefinition of the concept of empathy into a communication pattern that involves key words and expressions that do not feel rushed and which can be taught to a machine. Given that empathy is a core value of healthcare, this hollowing out of the concept of empathy is concerning. Second, insofar as empathetic chatbots do not seek to facilitate or support the provision of empathetic care by human healthcare professionals but rather perform empathy themselves, they raise the risk of redefining healthcare’s empathy problem as a lack of empathy on the part of healthcare professionals. It is argued that this risks transforming the real issue underlying healthcare’s empathy problem—that healthcare professionals do not have the time and space needed to provide empathetic care (in part because of the introduction of digital health tech in the first place)—into an “orphan problem”. This in turn may create a vicious circle, whereby attention and resources are drawn away from structural solutions to healthcare’s empathy problem to technologies which are ever more successful in simulating empathy.

AI & Society 41(1): 289–306. Open access PDF: https://link.springer.com/content/pdf/10.1007/s00146-025-02441-4.pdf
Citations: 0
When algorithms sell: rethinking consumer behavior in the AI-enhanced marketplace
IF 4.7 | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-07-30 | DOI: 10.1007/s00146-025-02453-0
Nasser Bouchareb
AI & Society 41(1): 469–470.
Citations: 0
Beyond accidents and misuse: decoding the structural risk dynamics of artificial intelligence
IF 4.7 | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-07-29 | DOI: 10.1007/s00146-025-02419-2
Kyle A. Kilian

As artificial intelligence (AI) becomes increasingly embedded in the core functions of social, political, and economic life, it catalyzes structural transformations with far-reaching societal implications. This paper advances the concept of structural risk by introducing a framework grounded in complex systems research to examine how rapid AI integration can generate emergent, system-level dynamics beyond conventional, proximate threats such as system failures or malicious misuse. It argues that such risks are both influenced by and constitutive of broader sociotechnical structures. We classify structural risks into three interrelated categories: antecedent structural causes, antecedent AI system causes, and deleterious feedback loops. By tracing these interactions, we show how unchecked AI development can destabilize trust, shift power asymmetries, and erode decision-making agency across scales. To anticipate and govern these dynamics, this paper proposes a methodological agenda incorporating scenario mapping, simulation, and exploratory foresight. We conclude with policy recommendations aimed at cultivating institutional resilience and adaptive governance strategies for navigating an increasingly volatile AI risk landscape.
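One of the methods the paper proposes, simulation, can be hinted at with a toy model of a "deleterious feedback loop" in which AI adoption feeds on trust while adoption unmatched by oversight erodes trust in turn. The dynamics, coefficients, and function below are invented for illustration and are not drawn from the paper.

```python
def simulate_trust(steps: int = 10, adoption: float = 0.08,
                   oversight: float = 0.5, trust: float = 1.0) -> list:
    """Toy feedback loop: adoption rate grows with current trust,
    and adoption not matched by oversight erodes trust next step."""
    history = [trust]
    rate = adoption
    for _ in range(steps):
        rate = min(1.0, rate * (1 + 0.5 * trust))        # adoption feeds on trust
        trust = max(0.0, trust - rate * (1 - oversight))  # unchecked adoption erodes trust
        history.append(trust)
    return history

weak = simulate_trust(oversight=0.2)
strong = simulate_trust(oversight=0.9)
# Stronger oversight dampens the deleterious loop: trust ends higher.
```

Even this crude sketch reproduces the qualitative claim: with weak oversight the loop drives trust down sharply, while stronger oversight slows the erosion, which is the kind of system-level dynamic scenario mapping and simulation are meant to surface.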

AI & Society 41(1): 23–42. Open access PDF: https://link.springer.com/content/pdf/10.1007/s00146-025-02419-2.pdf
Citations: 0