Reflexive ecologies of knowledge in the future of AI & Society
Steven Watson, Satinder P. Gill, Donghee Shin, Manh-Tung Ho
Pub Date: 2026-01-13 | DOI: 10.1007/s00146-026-02859-4 | AI & Society 41(1): 1–3
AI, society, and the shadows of our desires
Larry Stapleton
Pub Date: 2025-10-22 | DOI: 10.1007/s00146-025-02484-7 | AI & Society 40(7): 5109–5113
Is Consent-GPT valid? Public attitudes to generative AI use in surgical consent
Jemima Winifred Allen, Ivar Rodríguez Hannikainen, Julian Savulescu, Dominic Wilkinson, Brian David Earp
Pub Date: 2025-10-09 | DOI: 10.1007/s00146-025-02644-9 | AI & Society | Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7618318/pdf/
Healthcare systems often delegate surgical consent-seeking to members of the treating team other than the surgeon (e.g., junior doctors in the UK and Australia). Yet, little is known about public attitudes toward this practice compared to emerging AI-supported options. This first large-scale empirical study examines how laypeople evaluate the validity and liability risks of using an AI-supported surgical consent system (Consent-GPT). We randomly assigned 376 UK participants (demographically representative for age, ethnicity, and gender) to evaluate identical transcripts of surgical consent interviews framed as being conducted by either Consent-GPT, a junior doctor, or the treating surgeon. Participants broadly agreed that AI-supported consent was valid (87.6% agreement), but rated it significantly lower than consent sought solely by human clinicians (treating surgeon: 97.6% agreement; junior doctor: 96.2%). Participants expressed substantially lower satisfaction with AI-supported consent compared to human-only processes (Consent-GPT: 59.5% satisfied; treating surgeon: 96.8%; junior doctor: 93.1%), despite identical consent interactions (i.e., the same informational content and display format). Regarding justification to sue the hospital following a complication, participants were slightly more inclined to support legal action in response to AI-supported consent than human-only consent. However, the strongest predictor was proper risk disclosure, not the consent-seeking agent. As AI integration in healthcare accelerates, these results highlight critical considerations for implementation strategies, suggesting that a hybrid approach to consent delegation that leverages AI's information-sharing capabilities while preserving meaningful human engagement may be more acceptable to patients than an otherwise identical process with relatively less human-to-human interaction.
{"title":"Is <i>Consent-GPT</i> valid? Public attitudes to generative AI use in surgical consent.","authors":"Jemima Winifred Allen, Ivar Rodríguez Hannikainen, Julian Savulescu, Dominic Wilkinson, Brian David Earp","doi":"10.1007/s00146-025-02644-9","DOIUrl":"10.1007/s00146-025-02644-9","url":null,"abstract":"<p><p>Healthcare systems often delegate surgical consent-seeking to members of the treating team other than the surgeon (e.g., junior doctors in the UK and Australia). Yet, little is known about public attitudes toward this practice compared to emerging AI-supported options. This first large-scale empirical study examines how laypeople evaluate the validity and liability risks of using an AI-supported surgical consent system (<i>Consent-GPT</i>). We randomly assigned 376 UK participants (demographically representative for age, ethnicity, and gender) to evaluate identical transcripts of surgical consent interviews framed as being conducted by either <i>Consent-GPT</i>, a junior doctor, or the treating surgeon. Participants broadly agreed that AI-supported consent was valid (87.6% agreement), but rated it significantly lower than consent sought solely by human clinicians (treating surgeon: 97.6% agreement; junior doctor: 96.2%). Participants expressed substantially lower satisfaction with AI-supported consent compared to human-only processes (<i>Consent-GPT</i>: 59.5% satisfied; treating surgeon 96.8%; junior doctor: 93.1%), despite identical consent interactions (i.e., the same informational content and display format). Regarding justification to sue the hospital following a complication, participants were slightly more inclined to support legal action in response to AI-supported consent than human-only consent. However, the strongest predictor was proper risk disclosure, not the consent-seeking agent. As AI integration in healthcare accelerates, these results highlight critical considerations for implementation strategies, suggesting that a hybrid approach to consent delegation that leverages AI's information sharing capabilities while preserving meaningful human engagement may be more acceptable to patients than an otherwise identical process with relatively less human-to-human interaction.</p>","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":" ","pages":""},"PeriodicalIF":4.7,"publicationDate":"2025-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7618318/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145446200","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Body metaphors in science fiction narratives: a proposal for challenging stereotypes of robots in narrative
Xiaoyang Guo, Yi Zeng
Pub Date: 2025-08-25 | DOI: 10.1007/s00146-025-02431-6 | AI & Society 41(1): 279–288
Since the latter half of the twentieth century, science fiction narratives centered on humanoid robots have continuously explored the future of human–machine symbiosis through embodied character design, providing a conceptual testing ground for real-world robotic development. This study focuses on the metaphorical mechanisms underlying the “quasi-body” of robots in such narratives, revealing how they challenge the stability of the concept of the human. Employing a dual perspective that integrates phenomenological embodiment theory and conceptual metaphor theory, this article analyzes how various humanoid robotic figures in science fiction narratives are modeled upon the human “body” and how the robots’ “quasi-body” reciprocally reshapes the concept of the “human”. The argument unfolds in three progressive stages: deconstructing the metaphorical imitation of the human in robotic embodiment within science fiction narratives, critiquing the simplified functional body that overlooks the fundamental role of the body in cognition, and tracing the reverse influence of humanoid metaphor modeling on the conceptualization of the human. The study seeks to expose the intrinsic tensions embedded in bodily metaphorization within human–robot modeling. As human–robot/machine symbiosis becomes an increasingly normalized condition of existence, only by disrupting entrenched cognitive frameworks of the body stereotype can we cultivate novel relational paradigms imbued with greater ethical imagination in technological reality.
{"title":"Body metaphors in science fiction narratives: a proposal for challenging stereotypes of robots in narrative","authors":"Xiaoyang Guo, Yi Zeng","doi":"10.1007/s00146-025-02431-6","DOIUrl":"10.1007/s00146-025-02431-6","url":null,"abstract":"<div><p>Since the latter half of the twentieth century, science fiction narratives centered on humanoid robots have continuously explored the future of human–machine symbiosis through embodied character design, providing a conceptual testing ground for real-world robotic development. This study focuses on the metaphorical mechanisms underlying the “quasi-body” of robots in such narratives, revealing how they challenge the stability of the human concept. By analyzing how various humanoid robotic figures in science fiction narratives are modeled upon the human “body” and how the robots’ “quasi-body” reciprocally reshape the concept of the “human”, this article employs a dual perspective integrating phenomenological embodiment theory and conceptual metaphor theory. The argument unfolds in three progressive stages: deconstructing the metaphorical imitation of human in robotic embodiment within science fiction narrative, critiquing the simplified functional body that overlooks the fundamental role of the body in cognition, and tracing the reverse influence of humanoid metaphor modeling on the conceptualization of the human. This study seeks to expose the intrinsic tensions embedded in bodily metaphorization within human–robot modeling. As human–robot/machine symbiosis becomes an increasingly normalized condition of existence, only by disrupting entrenched cognitive frameworks of body stereotype can we cultivate novel relational paradigms imbued with greater ethical imagination in the technological reality.</p></div>","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"41 1","pages":"279 - 288"},"PeriodicalIF":4.7,"publicationDate":"2025-08-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146099125","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
What it takes to control AI by design: human learning
Dov Te’eni, Inbal Yahav, David Schwartz
Pub Date: 2025-08-24 | DOI: 10.1007/s00146-025-02401-y | AI & Society 41(1): 237–250 | Open access PDF: https://link.springer.com/content/pdf/10.1007/s00146-025-02401-y.pdf
Experts in government, academia, and practice are increasingly concerned about the need for human oversight in critical human–AI systems. At the same time, traditional control designs are proving inadequate to handle the complexities of new AI technologies. Incorporating insights from systems theory, we propose a robust framework that elucidates control at multiple levels and in multiple modes of operation, ensuring meaningful human control over the human–AI system. Our framework is built on continual human learning to match advances in machine learning. The human–AI system operates in two modes, stable and adaptive, which, in combination, enable the effective use of big data and the learning necessary for effective control and adaptation. Each system level and mode of operation requires a specific control-feedback loop, and all controls must be aligned in performance and values with the higher system level to provide human control over AI. Applying these ideas to a human–AI decision system for text classification in critical applications, we demonstrate how a method we call reciprocal human–machine learning can be designed to facilitate an adaptive mode and how oversight can be implemented in a stable mode. These designs yield high and consistent classification performance that is unbiased and closely aligned with human values, while ensuring effective human learning that enables humans to stay in the loop and stay in control. Our framework provides spadework for a model of control in critical AI decision systems operating in volatile environments, where humans continue to learn alongside the machine.
{"title":"What it takes to control AI by design: human learning","authors":"Dov Te’eni, Inbal Yahav, David Schwartz","doi":"10.1007/s00146-025-02401-y","DOIUrl":"10.1007/s00146-025-02401-y","url":null,"abstract":"<div><p>Experts in government, academia, and practice are increasingly concerned about the need for human oversight in critical human–AI systems. At the same time, traditional control designs are proving inadequate to handle the complexities of new AI technologies. Incorporating insights from systems theory, we propose a robust framework that elucidates control at multiple levels and in multiple modes of operation, ensuring meaningful human control over the human–AI system. Our framework is built on continual human learning to match advances in machine learning. The human–AI system operates in two modes: stable and adaptive, which, in combination, enable the effective use of big data and the learning necessary for effective control and adaptation. Each system level and mode of operation requires a specific control-feedback loop, and all controls must be aligned for performance and values with the higher system level to provide human control over AI. Applying these ideas to a human–AI decision system for text classification in critical applications, we demonstrate how a method we call reciprocal human–machine learning can be designed to facilitate an adaptive mode and how oversight can be implemented in a stable mode. These designs yield high and consistent classification performance that is unbiased and closely aligned with human values. It ensures effective human learning, enabling humans to stay in the loop and stay in control. Our framework provides spadework for a model of control in critical AI decision systems operating in volatile environments, where humans continue to learn alongside the machine.</p></div>","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"41 1","pages":"237 - 250"},"PeriodicalIF":4.7,"publicationDate":"2025-08-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00146-025-02401-y.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146099156","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The paradox of artificial intelligence (AI) and narrative-based medicine: challenges and potential for enhanced patient care
Nadirah Ghenimi, Romona Govender, Keymanthri Moodley
Pub Date: 2025-08-09 | DOI: 10.1007/s00146-025-02418-3 | AI & Society 41(1): 251–257 | Open access PDF: https://link.springer.com/content/pdf/10.1007/s00146-025-02418-3.pdf
The integration of artificial intelligence (AI) into healthcare has transformed patient care through advanced diagnostics, personalized treatment plans, and predictive analytics. However, this technological evolution presents a paradox when juxtaposed with narrative-based medicine (NBM), which emphasizes the patient’s story and human experience in healthcare delivery. The integration of AI into NBM raises questions regarding its clinical applicability, resistance from patients and physicians, emotional considerations, time constraints, and the ability to balance psychosocial and biomedical care. This critical review explores the challenges and potential of combining AI with NBM, aiming to enhance patient care by leveraging the strengths of both approaches.
{"title":"The paradox of artificial intelligence (AI) and narrative-based medicine: challenges and potential for enhanced patient care","authors":"Nadirah Ghenimi, Romona Govender, Keymanthri Moodley","doi":"10.1007/s00146-025-02418-3","DOIUrl":"10.1007/s00146-025-02418-3","url":null,"abstract":"<div><p>The integration of artificial intelligence (AI) into healthcare has transformed patient care through advanced diagnostics, personalized treatment plans, and predictive analytics. However, this technological evolution presents a paradox when juxtaposed with narrative-based medicine (NBM), which emphasizes the patient’s story and human experience in healthcare delivery. The integration of AI into the NBM raises questions regarding its clinical applicability, resistance from patients and physicians, emotional considerations, time constraints, and ability to balance psychosocial and biomedical care. This critical review explores the challenges and potential of combining AI with NBM, aiming to enhance patient care by leveraging the strengths of both approaches.</p></div>","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"41 1","pages":"251 - 257"},"PeriodicalIF":4.7,"publicationDate":"2025-08-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00146-025-02418-3.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146099053","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Technosolutionism and the empathetic medical chatbot
Tamar Sharon
Pub Date: 2025-08-08 | DOI: 10.1007/s00146-025-02441-4 | AI & Society 41(1): 289–306 | Open access PDF: https://link.springer.com/content/pdf/10.1007/s00146-025-02441-4.pdf
This article argues for the value of applying the concept of technosolutionism to empathetic medical chatbots. By directing one’s attention to the relationship between (techno)solutions and the problems they are supposed to solve, technosolutionism helps identify two important risks in this context that tend to get overlooked in the discussion on privacy, bias, and hallucination risks of (generative) AI. First, empathetic chatbots may lead to a redefinition of the concept of empathy into a communication pattern that involves key words and expressions that do not feel rushed and which can be taught to a machine. Given that empathy is a core value of healthcare, this hollowing out of the concept of empathy is concerning. Second, insofar as empathetic chatbots do not seek to facilitate or support the provision of empathetic care by human healthcare professionals but rather perform empathy themselves, they raise the risk of redefining healthcare’s empathy problem as a lack of empathy on the part of healthcare professionals. It is argued that this risks transforming the real issue underlying healthcare’s empathy problem—that healthcare professionals do not have the time and space needed to provide empathetic care (in part because of the introduction of digital health tech in the first place)—into an “orphan problem”. This in turn may create a vicious circle, whereby attention and resources are drawn away from structural solutions to healthcare’s empathy problem to technologies which are ever more successful in simulating empathy.
{"title":"Technosolutionism and the empathetic medical chatbot","authors":"Tamar Sharon","doi":"10.1007/s00146-025-02441-4","DOIUrl":"10.1007/s00146-025-02441-4","url":null,"abstract":"<div><p>This article argues for the value of applying the concept of technosolutionism to empathetic medical chatbots. By directing one’s attention to the relationship between (techno)solutions and the problems they are supposed to solve, technosolutionism helps identify two important risks in this context that tend to get overlooked in the discussion on privacy, bias, and hallucination risks of (generative) AI. First, empathetic chatbots may lead to a redefinition of the concept of empathy into a communication pattern that involves key words and expressions that do not feel rushed and which can be taught to a machine. Given that empathy is a core value of healthcare, this hollowing out of the concept of empathy is concerning. Second, insofar as empathetic chatbots do not seek to facilitate or support the provision of empathetic care by human healthcare professionals but rather perform empathy themselves, they raise the risk of redefining healthcare’s empathy problem as a lack of empathy on the part of healthcare professionals. It is argued that this risks transforming the real issue underlying healthcare’s empathy problem—that healthcare professionals do not have the time and space needed to provide empathetic care (in part because of the introduction of digital health tech in the first place)—into an “orphan problem”. This in turn may create a vicious circle, whereby attention and resources are drawn away from structural solutions to healthcare’s empathy problem to technologies which are ever more successful in simulating empathy.</p></div>","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"41 1","pages":"289 - 306"},"PeriodicalIF":4.7,"publicationDate":"2025-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00146-025-02441-4.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146099157","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Beyond accidents and misuse: decoding the structural risk dynamics of artificial intelligence
Kyle A. Kilian
Pub Date: 2025-07-29 | DOI: 10.1007/s00146-025-02419-2 | AI & Society 41(1): 23–42 | Open access PDF: https://link.springer.com/content/pdf/10.1007/s00146-025-02419-2.pdf
As artificial intelligence (AI) becomes increasingly embedded in the core functions of social, political, and economic life, it catalyzes structural transformations with far-reaching societal implications. This paper advances the concept of structural risk by introducing a framework grounded in complex systems research to examine how rapid AI integration can generate emergent, system-level dynamics beyond conventional, proximate threats such as system failures or malicious misuse. It argues that such risks are both influenced by and constitutive of broader sociotechnical structures. We classify structural risks into three interrelated categories: antecedent structural causes, antecedent AI system causes, and deleterious feedback loops. By tracing these interactions, we show how unchecked AI development can destabilize trust, shift power asymmetries, and erode decision-making agency across scales. To anticipate and govern these dynamics, this paper proposes a methodological agenda incorporating scenario mapping, simulation, and exploratory foresight. We conclude with policy recommendations aimed at cultivating institutional resilience and adaptive governance strategies for navigating an increasingly volatile AI risk landscape.
{"title":"Beyond accidents and misuse: decoding the structural risk dynamics of artificial intelligence","authors":"Kyle A. Kilian","doi":"10.1007/s00146-025-02419-2","DOIUrl":"10.1007/s00146-025-02419-2","url":null,"abstract":"<div><p>As artificial intelligence (AI) becomes increasingly embedded in the core functions of social, political, and economic life, it catalyzes structural transformations with far-reaching societal implications. This paper advances the concept of structural risk by introducing a framework grounded in complex systems research to examine how rapid AI integration can generate emergent, system-level dynamics beyond conventional, proximate threats such as system failures or malicious misuse. It argues that such risks are both influenced by and constitutive of broader sociotechnical structures. We classify structural risks into three interrelated categories: antecedent structural causes, antecedent AI system causes, and deleterious feedback loops. By tracing these interactions, we show how unchecked AI development can destabilize trust, shift power asymmetries, and erode decision-making agency across scales. To anticipate and govern these dynamics, this paper proposes a methodological agenda incorporating scenario mapping, simulation, and exploratory foresight. We conclude with policy recommendations aimed at cultivating institutional resilience and adaptive governance strategies for navigating an increasingly volatile AI risk landscape.</p></div>","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"41 1","pages":"23 - 42"},"PeriodicalIF":4.7,"publicationDate":"2025-07-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00146-025-02419-2.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146099004","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}