
Latest publications in AI & Society

Exploring automation bias in human–AI collaboration: a review and implications for explainable AI
IF 4.7 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-07-03 DOI: 10.1007/s00146-025-02422-7
Giuseppe Romeo, Daniela Conti

As Artificial Intelligence (AI) becomes increasingly embedded in high-stakes domains such as healthcare, law, and public administration, automation bias (AB)—the tendency to over-rely on automated recommendations—has emerged as a critical challenge in human–AI collaboration. While previous reviews have examined AB in traditional computer-assisted decision-making, research on its implications in modern AI-driven work environments remains limited. To address this gap, this research systematically investigates how AB manifests in these settings and the cognitive mechanisms that influence it. Following PRISMA 2020 guidelines, we reviewed 35 peer-reviewed studies from SCOPUS, ScienceDirect, PubMed, and Google Scholar. The included literature, published between January 2015 and April 2025, spans fields such as cognitive psychology, human factors engineering, human–computer interaction, and neuroscience, providing an interdisciplinary foundation for our analysis. Traditional perspectives attribute AB to over-trust in automation or attentional constraints, resulting in users perceiving AI-generated outputs as reliable. However, our review presents a more nuanced view. While confirming some prior findings, it also sheds light on additional interacting factors such as AI literacy, level of professional expertise, cognitive profile, developmental trust dynamics, task verification demands, and explanation complexity. Notably, although Explainable AI (XAI) and transparency mechanisms are designed to mitigate AB, overly technical, cognitively demanding, or even simplistic explanations may inadvertently reinforce misplaced trust, especially among less experienced professionals with low AI literacy. Taken together, these findings suggest that although explanations may increase perceived system acceptability, they are often insufficient to improve decision accuracy or mitigate AB. Instead, user engagement emerges as the most feasible and impactful point of intervention. As increased verification effort has been shown to reduce complacency toward AI mis-recommendations, we propose explanation design strategies that actively promote critical engagement and independent verification. These conclusions offer both theoretical and practical contributions to bias-aware AI development, underscoring that explanation usability is best supported by features such as understandability and adaptiveness.
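The review's actionable claim—that greater verification effort reduces complacency toward AI mis-recommendations—can be made concrete with a toy decision model. The Python sketch below is not from the paper; every parameter (AI accuracy, verification rate, the user's chance of catching an error or of overturning a correct recommendation) is a hypothetical assumption, chosen only to illustrate how final decision accuracy depends on how often the user independently checks the AI.

```python
import random

def simulate_decisions(n_cases=10_000, ai_accuracy=0.85, p_verify=0.3,
                       p_catch_error=0.70, p_keep_correct=0.95, seed=42):
    """Toy model of automation bias.

    A user either accepts the AI recommendation as-is, or (with
    probability p_verify) independently checks it. Verification catches
    a wrong recommendation with probability p_catch_error and retains a
    correct one with probability p_keep_correct. All parameter values
    are hypothetical illustrations, not figures from the paper.
    """
    rng = random.Random(seed)
    correct = 0
    for _ in range(n_cases):
        ai_is_right = rng.random() < ai_accuracy
        if rng.random() < p_verify:          # user verifies the recommendation
            if ai_is_right:
                correct += rng.random() < p_keep_correct
            else:
                correct += rng.random() < p_catch_error
        else:                                # complacent acceptance
            correct += ai_is_right
    return correct / n_cases

if __name__ == "__main__":
    for p in (0.0, 0.3, 0.6, 0.9):
        print(f"verification rate {p:.1f} -> decision accuracy "
              f"{simulate_decisions(p_verify=p):.3f}")
```

Under these assumed numbers, accuracy climbs with the verification rate because the expected gain from catching AI errors (0.15 × 0.70 per verified case) outweighs the expected loss from overturning correct recommendations (0.85 × 0.05); with other parameter choices the trade-off can invert, which is consistent with the review's emphasis on designing explanations that calibrate engagement rather than merely increase acceptance.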

{"title":"Exploring automation bias in human–AI collaboration: a review and implications for explainable AI","authors":"Giuseppe Romeo,&nbsp;Daniela Conti","doi":"10.1007/s00146-025-02422-7","DOIUrl":"10.1007/s00146-025-02422-7","url":null,"abstract":"<div><p>As Artificial Intelligence (AI) becomes increasingly embedded in high-stakes domains such as healthcare, law, and public administration, automation bias (AB)—the tendency to over-rely on automated recommendations—has emerged as a critical challenge in human–AI collaboration. While previous reviews have examined AB in traditional computer-assisted decision-making, research on its implications in modern AI-driven work environments remains limited. To address this gap, this research systematically investigates how AB manifests in these settings and the cognitive mechanisms that influence it. Following PRISMA 2020 guidelines, we reviewed 35 peer-reviewed studies from SCOPUS, ScienceDirect, PubMed, and Google Scholar. The included literature, published between January 2015 and April 2025, spans fields such as cognitive psychology, human factors engineering, human–computer interaction, and neuroscience, providing an interdisciplinary foundation for our analysis. Traditional perspectives attribute AB to over-trust in automation or attentional constraints, resulting in users perceiving AI-generated outputs as reliable. However, our review presents a more nuanced view. While confirming some prior findings, it also sheds light on additional interacting factors such as, AI literacy, level of professional expertise, cognitive profile, developmental trust dynamics, task verification demands, and explanation complexity. Notably, although Explainable AI (XAI) and transparency mechanisms are designed to mitigate AB, overly technical, cognitively demanding, or even simplistic explanations may inadvertently reinforce misplaced trust, especially among less experienced professionals with low AI literacy. Taken together, these findings suggest that although explanations may increase perceived system acceptability, they are often insufficient to improve decision accuracy or mitigate AB. Instead, user engagement emerges as the most feasible and impactful point of intervention. As increased verification effort has been shown to reduce complacency toward AI mis-recommendations, we propose explanation design strategies that actively promote critical engagement and independent verification. These conclusions offer both theoretical and practical contributions to bias-aware AI development, underscoring that explanation usability is best supported by features such as understandability and adaptiveness.</p></div>","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"41 1","pages":"259 - 278"},"PeriodicalIF":4.7,"publicationDate":"2025-07-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00146-025-02422-7.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146099003","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Not just a plus: rethinking the “AI + Education” illusion
IF 4.7 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-07-03 DOI: 10.1007/s00146-025-02458-9
Ruoxin Ritter Wang
{"title":"Not just a plus: rethinking the “AI + Education” illusion","authors":"Ruoxin Ritter Wang","doi":"10.1007/s00146-025-02458-9","DOIUrl":"10.1007/s00146-025-02458-9","url":null,"abstract":"","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"41 1","pages":"475 - 476"},"PeriodicalIF":4.7,"publicationDate":"2025-07-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146098992","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Why the confusion matrix fails as a model of knowledge
IF 4.7 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-07-02 DOI: 10.1007/s00146-025-02456-x
Ian van der Linde
{"title":"Why the confusion matrix fails as a model of knowledge","authors":"Ian van der Linde","doi":"10.1007/s00146-025-02456-x","DOIUrl":"10.1007/s00146-025-02456-x","url":null,"abstract":"","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"41 1","pages":"471 - 472"},"PeriodicalIF":4.7,"publicationDate":"2025-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146099154","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
The phenomenon of deep nudes—a new threat to children and adults
IF 4.7 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-07-01 DOI: 10.1007/s00146-025-02425-4
Kamil Kopecký, Dominik Voráč

The article explores the misuse of artificial intelligence (AI) to generate pornographic images, including child pornography, through so-called deep nudes—applications that create realistic nude images from photographs without the consent of the individuals depicted. This phenomenon has serious psychological and social impacts on victims, especially children, who can become targets of cyberbullying, blackmail, and other forms of abuse. The paper presents the results of a survey on the use of artificial intelligence among Czech primary and secondary school students (2024), which involved over 27,336 respondents. Deep nude photos were created with the help of AI by 2.77% of Czech primary and secondary school pupils. Deep nudes are more likely to be generated by boys, who are 3.56 times more likely to create such a photo than girls. There are also differences by age and type of school, but these are negligible.
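The headline figures can be cross-checked with simple arithmetic. The sketch below assumes, purely for illustration, an even gender split among respondents—the abstract does not report one—and derives the gender-specific rates implied by the overall 2.77% prevalence and the 3.56× boy-to-girl ratio.

```python
# Hypothetical back-of-envelope check of the reported survey figures.
# Assumption (not from the paper): respondents split evenly by gender.
n_respondents = 27_336
overall_rate = 0.0277          # share of pupils who created deep nudes
ratio_boys_to_girls = 3.56     # reported relative likelihood

# With a 50/50 split: overall_rate = (rate_boys + rate_girls) / 2
# and rate_boys = ratio_boys_to_girls * rate_girls.
rate_girls = 2 * overall_rate / (1 + ratio_boys_to_girls)
rate_boys = ratio_boys_to_girls * rate_girls

print(f"implied rate, girls: {rate_girls:.2%}")                        # ~1.21%
print(f"implied rate, boys:  {rate_boys:.2%}")                         # ~4.33%
print(f"implied creators overall: {overall_rate * n_respondents:.0f}") # ~757 pupils
```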

{"title":"The phenomenon of deep nudes—a new threat to children and adults","authors":"Kamil Kopecký,&nbsp;Dominik Voráč","doi":"10.1007/s00146-025-02425-4","DOIUrl":"10.1007/s00146-025-02425-4","url":null,"abstract":"<div><p>The article explores the misuse of artificial intelligence (AI) to generate pornographic images, including child pornography, through so-called deep nudes—applications that create realistic nude images from photographs without the consent of individuals. This phenomenon has serious psychological and social impacts on victims, especially children, who can become targets of cyberbullying, blackmail and other forms of abuse. The paper presents the results of a survey on the use of artificial intelligence among Czech primary and secondary school students (2024), which involved over 27,336 respondents. Deep nude photos with the help of AI were created by 2.77% of Czech primary and secondary school pupils. Deep nude is more likely to be generated by boys, who are 3.56 times more likely to generate a deep nude photo than girls. There are also differences based on age and type of school, but these differences are negligible.</p></div>","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"41 1","pages":"545 - 556"},"PeriodicalIF":4.7,"publicationDate":"2025-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00146-025-02425-4.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146099118","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Artificial intelligence through the eyes of Hannah Arendt: fear, alienation, and empowerment
IF 4.7 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-06-30 DOI: 10.1007/s00146-025-02435-2
Colin Ashruf

Hannah Arendt is known—among the many other contributions to political theory, ethics, and reflections on the human condition—for her analysis of the origins of pre-WWII totalitarianism, but her insights into the history of science and technology, particularly their impact on society and politics, also prove valuable in putting recent developments in artificial intelligence and social media into perspective. In this paper, I extrapolate Arendt’s framework to examine the potential threat artificial intelligence poses to humanity, drawing parallels between contemporary technological advances and those of Arendt’s era, such as nuclear weapons and space exploration. I argue that the fear of artificial intelligence ultimately reflects a deeper fear of humanity itself. I then explore Arendt’s analysis of how the history of science and technology has brought us to a point where our R&D efforts no longer seem to be focused on physical products but rather on intricate processes—in the case of artificial intelligence, self-learning algorithms that rely on human input for proper functioning. The scientific method, which spurred the recent scientific revolution and, as a side effect, unleashed an impressive range of technological breakthroughs on society at an ever-accelerating pace, has, through increased consumerism and job automation, added to world-alienation and self-alienation, culminating, in turn, in a society of increasingly isolated individuals that are vulnerable to populism and authoritarianism. In line with Arendt, I contend, however, that negative outcomes are not inherent to scientific and technological advancement. While social media and artificial intelligence can be used for surveillance, control, and the spreading of misinformation and hate, as we sometimes see today, they can equally be used to counter world- and self-alienation. These technologies hold the potential, for instance, to enhance education in the humanities, uphold the boundaries between science and technology and politics, and make democratic processes swifter, more direct, and more transparent, thereby reinforcing participatory democracy and fostering a more engaged and connected society.

{"title":"Artificial intelligence through the eyes of Hannah Arendt: fear, alienation, and empowerment","authors":"Colin Ashruf","doi":"10.1007/s00146-025-02435-2","DOIUrl":"10.1007/s00146-025-02435-2","url":null,"abstract":"<div><p>Hannah Arendt is known—among the many other contributions to political theory, ethics, and reflections on the human condition—for her analysis on the origins of pre-WWII totalitarianism, but her insights into the history of science and technology, particularly their impact on society and politics, also prove valuable to help put recent developments in artificial intelligence and social media into perspective. In this paper, I extrapolate Arendt’s framework to examine the potential threat artificial intelligence poses to humanity, drawing parallels between contemporary technological advances and those of Arendt’s era, such as nuclear weapons and space exploration. I argue that the fear of artificial intelligence ultimately reflects a deeper fear of humanity itself. I then explore Arendt’s analysis of how the history of science and technology has brought us to a point where our R&amp;D efforts no longer seem to be focused on physical products but rather on intricate processes—in the case of artificial intelligence self-learning algorithms that rely on human input for proper functioning. The scientific method, which spurred the recent scientific revolution and, as a side effect, unleashed an impressive range of technological breakthroughs on society at an ever-accelerating pace, has, through increased consumerism and job automation, added to world-alienation and self-alienation, culminating, in turn, in a society of increasingly isolated individuals that are vulnerable to populism and authoritarianism. In line with Arendt, I contend, however, that negative outcomes are not inherent to scientific and technological advancement. While social media and artificial intelligence can be used for surveillance, control, and the spreading of misinformation and hate, as we sometimes see today, they can equally be used to counter world- and self-alienation. These technologies hold the potential, for instance, to enhance education in the humanities, uphold the boundaries between science and technology and politics, and make democratic processes swifter, more direct, and more transparent, thereby reinforcing participatory democracy and fostering a more engaged and connected society.</p></div>","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"41 1","pages":"455 - 462"},"PeriodicalIF":4.7,"publicationDate":"2025-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146098996","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Deciphering authenticity in the age of AI: how AI-generated disinformation images and AI detection tools influence judgements of authenticity
IF 4.7 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-06-29 DOI: 10.1007/s00146-025-02416-5
Aqsa Farooq, Claes de Vreese

An ongoing surge of Artificial Intelligence (AI)-enabled false content has been spreading through the information ecosystem, including AI-generated images, which have been used as part of political disinformation campaigns. Thus, there remains a pressing need to understand which factors individuals rely upon when determining whether images are AI-generated, particularly when such images can be used to spread disinformation. AI-generated images have been characterised by their aesthetic realism, which can be leveraged to deceive users, and those who use generative AI to create deceptive content also tend to exploit its ability to convey and elicit emotion. This experimental study explored how aesthetic realism and emotional salience, as key features of both AI-generated content and disinformation, may influence authenticity judgements of AI-generated disinformation images. In this study, 292 UK-based participants were presented with both AI-generated and non-AI-generated disinformation images which varied in aesthetic realism and emotional salience. Results showed that participants were more likely to judge realistic-looking AI-generated images as authentic compared with less realistic-looking AI-generated images, but did so with less confidence in their decision. Emotional salience was not a significant predictor of judgements. When participants were presented with the correct verdict of an AI detection tool, their reliance on the tool to update their own judgements was predicted by the aesthetic realism of the image and their confidence levels. These findings may assist with the development of disinformation detection tools, as well as strategies that mitigate the spread of deceptive, synthesised visual content in the digital age.
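The analysis the abstract describes—binary authenticity judgements predicted by aesthetic realism and emotional salience—maps naturally onto logistic regression. The Python sketch below is not the authors' code: it fits such a model to synthetic data whose effect sizes are invented (a positive realism effect and a null salience effect, mirroring the reported pattern), merely to show the modelling template.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 292  # matches the study's sample size; the data themselves are synthetic

# Binary image properties (1 = high aesthetic realism / high emotional salience).
realism = rng.integers(0, 2, n)
salience = rng.integers(0, 2, n)

# Hypothetical data-generating process: realism raises the log-odds of an
# "authentic" judgement; salience contributes nothing, echoing the null finding.
log_odds = -0.5 + 1.2 * realism + 0.0 * salience
judged_authentic = (rng.random(n) < 1 / (1 + np.exp(-log_odds))).astype(int)

# Fit judgement ~ realism + salience and inspect the coefficients.
X = sm.add_constant(np.column_stack([realism, salience]).astype(float))
result = sm.Logit(judged_authentic, X).fit(disp=False)
print(result.summary(xname=["const", "realism", "salience"]))
```

The same template extends to the abstract's second result by swapping in reliance on the detection tool as the outcome and realism plus confidence as the predictors.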

{"title":"Deciphering authenticity in the age of AI: how AI-generated disinformation images and AI detection tools influence judgements of authenticity","authors":"Aqsa Farooq,&nbsp;Claes de Vreese","doi":"10.1007/s00146-025-02416-5","DOIUrl":"10.1007/s00146-025-02416-5","url":null,"abstract":"<div><p>An ongoing surge of Artificial Intelligence (AI)-enabled false content has been spreading its way through the information ecosystem, including AI-generated images, which have been used as part of political disinformation campaigns. Thus, there remains a pressing need to understand which factors individuals rely upon when determining whether images are AI-generated, particularly when they can be used to spread disinformation. AI-generated images have been characterised by their aesthetic realism, which can be leveraged to deceive users, and those who use generative AI to create deceptive content also tend to exploit its ability to convey and elicit emotion. This experimental study explored how aesthetic realism and emotional salience, as key features of both AI-generated content and disinformation, may influence authenticity judgements of AI-generated disinformation images. In this study, 292 UK-based participants were presented with both AI-generated and non-AI-generated disinformation images which varied in aesthetic realism and emotional salience. Results showed that participants were more likely to judge realistic-looking AI-generated images as being authentic compared with less realistic-looking AI-generated images, but did so with less confidence in their decision. Emotional salience was not a significant predictor of judgements. When participants were presented with the correct verdict of an AI detection tool, their reliance on the tool to update their own judgements was predicted by the aesthetic realism of the image and their confidence levels. These findings may assist with the development of disinformation detection tools, as well as strategies that mitigate the spread of deceptive, synthesised visual content in the digital age.</p></div>","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"41 1","pages":"493 - 504"},"PeriodicalIF":4.7,"publicationDate":"2025-06-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00146-025-02416-5.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146099006","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Ethical aspects of AI use in the circular economy
IF 4.7 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-06-29 DOI: 10.1007/s00146-025-02436-1
Iryna Bashynska

Artificial intelligence (AI) is increasingly applied to enable circular economy (CE) models by optimizing resource use, product design, waste management, and recycling. However, alongside potential environmental and economic benefits, the deployment of AI in circular systems raises significant ethical concerns that can influence real-world adoption of CE principles. This review critically examines key ethical issues at the intersection of AI and the CE, drawing on recent literature, case studies, and policy frameworks. We identify and discuss themes including algorithmic transparency and explainability, data privacy and bias, impacts on labor and employment, social inclusion and fairness, responsible AI deployment, and the role of human-in-the-loop oversight. We synthesize insights from academic studies, industry examples, and governance initiatives (e.g. the EU AI Act and OECD AI Principles) to illuminate how these ethical challenges affect the implementation of circular economy practices. Our analysis finds that issues like opaque algorithms, biased data, workforce displacement, and unequal access can undermine trust and equity in AI-driven circular solutions, thereby impeding their societal acceptance. Conversely, emerging principles of responsible AI—emphasizing transparency, accountability, fairness, and human oversight—offer pathways to mitigate risks and foster more inclusive, trustworthy circular economy transitions. The review concludes with recommendations for policymakers, organizations, and practitioners on aligning AI ethics with circular economy goals, highlighting the need for interdisciplinary collaboration to ensure that AI contributes to a sustainable and just circular future.

{"title":"Ethical aspects of AI use in the circular economy","authors":"Iryna Bashynska","doi":"10.1007/s00146-025-02436-1","DOIUrl":"10.1007/s00146-025-02436-1","url":null,"abstract":"<div><p>Artificial intelligence (AI) is increasingly applied to enable circular economy (CE) models by optimizing resource use, product design, waste management, and recycling. However, alongside potential environmental and economic benefits, the deployment of AI in circular systems raises significant ethical concerns that can influence real-world adoption of CE principles. This review critically examines key ethical issues at the intersection of AI and the CE, drawing on recent literature, case studies, and policy frameworks. We identify and discuss themes including algorithmic transparency and explainability, data privacy and bias, impacts on labor and employment, social inclusion and fairness, responsible AI deployment, and the role of human-in-the-loop oversight. We synthesize insights from academic studies, industry examples, and governance initiatives (e.g. the EU AI Act and OECD AI Principles) to illuminate how these ethical challenges affect the implementation of circular economy practices. Our analysis finds that issues like opaque algorithms, biased data, workforce displacement, and unequal access can undermine trust and equity in AI-driven circular solutions, thereby impeding their societal acceptance. Conversely, emerging principles of responsible AI—emphasizing transparency, accountability, fairness, and human oversight—offer pathways to mitigate risks and foster more inclusive, trustworthy circular economy transitions. The review concludes with recommendations for policymakers, organizations, and practitioners on aligning AI ethics with circular economy goals, highlighting the need for interdisciplinary collaboration to ensure that AI contributes to a sustainable and just circular future.</p></div>","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"41 1","pages":"575 - 593"},"PeriodicalIF":4.7,"publicationDate":"2025-06-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00146-025-02436-1.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146099206","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Attention is all they need: cognitive science and the (techno)political economy of attention in humans and machines
IF 4.7 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-06-28 DOI: 10.1007/s00146-025-02400-z
Pablo González de la Torre, Marta Pérez-Verdugo, Xabier E. Barandiaran

This paper critically analyses the “attention economy” within the framework of cognitive science and techno-political economics, as applied to both human and machine interactions. We explore how current business models, particularly in digital platform capitalism, harness user engagement by strategically shaping attentional patterns. These platforms utilize advanced AI and massive data analytics to enhance user engagement, creating a cycle of attention capture and data extraction. We review contemporary (neuro)cognitive theories of attention and platform engagement design techniques and criticize classical cognitivist and behaviourist theories for their inadequacies in addressing the potential harms of such engagement on user autonomy and wellbeing. 4E approaches to cognitive science, instead, emphasizing the embodied, extended, enactive, and ecological aspects of cognition, offer us an intrinsic normative standpoint and a more integrated understanding of how attentional patterns are actively constituted by adaptive digital environments. By examining the precarious nature of habit formation in digital contexts, we reveal the techno-economic underpinnings that threaten personal autonomy by disaggregating habits away from the individual, into an AI managed collection of behavioural patterns. Our current predicament suggests the necessity of a paradigm shift towards an ecology of attention. This shift aims to foster environments that respect and preserve human cognitive and social capacities, countering the exploitative tendencies of cognitive capitalism.

{"title":"Attention is all they need: cognitive science and the (techno)political economy of attention in humans and machines","authors":"Pablo González de la Torre,&nbsp;Marta Pérez-Verdugo,&nbsp;Xabier E. Barandiaran","doi":"10.1007/s00146-025-02400-z","DOIUrl":"10.1007/s00146-025-02400-z","url":null,"abstract":"<div><p>This paper critically analyses the “attention economy” within the framework of cognitive science and techno-political economics, as applied to both human and machine interactions. We explore how current business models, particularly in digital platform capitalism, harness user engagement by strategically shaping attentional patterns. These platforms utilize advanced AI and massive data analytics to enhance user engagement, creating a cycle of attention capture and data extraction. We review contemporary (neuro)cognitive theories of attention and platform engagement design techniques and criticize classical cognitivist and behaviourist theories for their inadequacies in addressing the potential harms of such engagement on user autonomy and wellbeing. 4E approaches to cognitive science, instead, emphasizing the embodied, extended, enactive, and ecological aspects of cognition, offer us an intrinsic normative standpoint and a more integrated understanding of how attentional patterns are actively constituted by adaptive digital environments. By examining the precarious nature of habit formation in digital contexts, we reveal the techno-economic underpinnings that threaten personal autonomy by disaggregating habits away from the individual, into an AI managed collection of behavioural patterns. Our current predicament suggests the necessity of a paradigm shift towards an ecology of attention. This shift aims to foster environments that respect and preserve human cognitive and social capacities, countering the exploitative tendencies of cognitive capitalism.</p></div>","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"41 1","pages":"5 - 21"},"PeriodicalIF":4.7,"publicationDate":"2025-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00146-025-02400-z.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146099207","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
The epistemological consequences of large language models: rethinking collective intelligence and institutional knowledge
IF 4.7 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-06-28 DOI: 10.1007/s00146-025-02426-3
Angjelin Hila
<div><p>In this paper, we interrogate the epistemological implications of human–LLM interaction with a specific focus on epistemological threats. We develop a theory of epistemic justification that synthesizes internalist and externalist conceptions of epistemic warrant termed collective epistemology. Collective epistemology considers the way epistemological warrant is distributed across human collectives. In pursuing this line of thinking, we take bounded rationality and dual-process theory as background assumptions in our analysis of collective epistemology as a mechanism of collective rationality. Following this approach, we distinguish between internalist justification as a robust standard of rationality and externalist justification as a reliable knowledge transmission mechanism. We argue that while these standards jointly constitute necessary and sufficient conditions for collective rationality, only internalist justification produces knowledge. We posit that reflective knowledge entails three necessary and sufficient conditions: a) rational agents reflectively understand the basis on which a proposition is evaluated as true b) in absence of a reflective evaluative basis for a proposition, rational agents consistently evaluate the reliability of truth sources, and c) rational agents have an epistemic duty to apply a) and b) as rational standards in their domains of competence. Since distributed rationality is socially scaffolded, we pursue the consequences of unchecked human–LLM interaction on social epistemic chains of dependence. We argue that LLMs approximate a type of externalist justification termed reliabilism but do not instantiate internalist standards of justification. Specifically, we argue that LLMs do not possess reflective justification for the information they produce but rather reliably transmit information whose reflective basis has been established in advance. Since LLMs cannot produce knowledge with reflective justifiedness but only reliabilist justifiedness, we argue that human outsourcing of reflective knowledge to reliable LLM information threatens to erode reflective standards of justification at scale. As a result, LLM information reliability disincentivizes comprehension and understanding in human agents. Human agents that forfeit comprehension and understanding for reliably correct results reduce the net justifiedness of their own beliefs and, consequently, reduce their ability to perform their epistemic duties professionally and civically. The scaled outsourcing of reflective knowledge to LLMs across collectives threatens the impoverishment of the production of reflective knowledge. To mitigate these potential threats, we propose developing epistemic norms across three tiers of social organization: a) normative epistemic model for individual human–LLM interaction, b) norm setting through institutional and organizational frameworks and c) the imposition of deontic constraints at organizational and/or legislative lev
在本文中,我们询问了人类与法学硕士互动的认识论含义,并特别关注认识论威胁。我们发展了一种综合了内部主义和外部主义认识论保证概念的认识论论证理论,称为集体认识论。集体认识论考虑的是认识论依据在人类集体中分布的方式。在追求这一思路的过程中,我们以有限理性和双过程理论作为背景假设来分析作为集体理性机制的集体认识论。根据这种方法,我们区分了作为理性的稳健标准的内部主义辩护和作为可靠的知识传递机制的外部主义辩护。我们认为,虽然这些标准共同构成了集体理性的必要和充分条件,但只有内部主义的论证才能产生知识。我们假设反思性知识需要三个必要和充分条件:a)理性主体反思性地理解命题被评估为真所依据的基础;b)在缺乏命题的反思性评估基础的情况下,理性主体始终如一地评估真理来源的可靠性;c)理性主体有认识论义务,将a)和b)作为其能力领域的理性标准。由于分布式理性是社会脚手架,我们追求不受约束的人类-法学硕士互动对社会认知依赖链的后果。我们认为,法学硕士近似于一种称为可靠性的外部主义论证,但并未实例化内部主义的论证标准。具体而言,我们认为法学硕士对其产生的信息不具有反思性的理由,而是可靠地传递事先建立了反思性基础的信息。由于法学硕士不能产生具有反思性正当性的知识,而只能产生可靠性正当性的知识,我们认为,人类将反思性知识外包给可靠的法学硕士信息,可能会在规模上侵蚀反思性正当性标准。因此,LLM信息可靠性阻碍了人类智能体的理解和理解。丧失对可靠正确结果的理解和理解的人类代理人降低了他们自己信念的净正当性,从而降低了他们在专业和公民方面履行认知职责的能力。反思性知识的大规模外包给跨集体的法学硕士,威胁着反思性知识生产的贫困。为了减轻这些潜在的威胁,我们建议在社会组织的三个层次上发展认知规范:a)个人与法学硕士互动的规范性认知模型,b)通过制度和组织框架制定规范,以及c)在组织和/或立法层面施加道义约束,以灌输法学硕士话语规范,减少认知上的弊端。
{"title":"The epistemological consequences of large language models: rethinking collective intelligence and institutional knowledge","authors":"Angjelin Hila","doi":"10.1007/s00146-025-02426-3","DOIUrl":"10.1007/s00146-025-02426-3","url":null,"abstract":"&lt;div&gt;&lt;p&gt;In this paper, we interrogate the epistemological implications of human–LLM interaction with a specific focus on epistemological threats. We develop a theory of epistemic justification that synthesizes internalist and externalist conceptions of epistemic warrant termed collective epistemology. Collective epistemology considers the way epistemological warrant is distributed across human collectives. In pursuing this line of thinking, we take bounded rationality and dual-process theory as background assumptions in our analysis of collective epistemology as a mechanism of collective rationality. Following this approach, we distinguish between internalist justification as a robust standard of rationality and externalist justification as a reliable knowledge transmission mechanism. We argue that while these standards jointly constitute necessary and sufficient conditions for collective rationality, only internalist justification produces knowledge. We posit that reflective knowledge entails three necessary and sufficient conditions: a) rational agents reflectively understand the basis on which a proposition is evaluated as true b) in absence of a reflective evaluative basis for a proposition, rational agents consistently evaluate the reliability of truth sources, and c) rational agents have an epistemic duty to apply a) and b) as rational standards in their domains of competence. Since distributed rationality is socially scaffolded, we pursue the consequences of unchecked human–LLM interaction on social epistemic chains of dependence. We argue that LLMs approximate a type of externalist justification termed reliabilism but do not instantiate internalist standards of justification. Specifically, we argue that LLMs do not possess reflective justification for the information they produce but rather reliably transmit information whose reflective basis has been established in advance. Since LLMs cannot produce knowledge with reflective justifiedness but only reliabilist justifiedness, we argue that human outsourcing of reflective knowledge to reliable LLM information threatens to erode reflective standards of justification at scale. As a result, LLM information reliability disincentivizes comprehension and understanding in human agents. Human agents that forfeit comprehension and understanding for reliably correct results reduce the net justifiedness of their own beliefs and, consequently, reduce their ability to perform their epistemic duties professionally and civically. The scaled outsourcing of reflective knowledge to LLMs across collectives threatens the impoverishment of the production of reflective knowledge. 
To mitigate these potential threats, we propose developing epistemic norms across three tiers of social organization: a) normative epistemic model for individual human–LLM interaction, b) norm setting through institutional and organizational frameworks and c) the imposition of deontic constraints at organizational and/or legislative lev","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"41 1","pages":"79 - 97"},"PeriodicalIF":4.7,"publicationDate":"2025-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146099198","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Beyond the attention economy, towards an ecology of attending. A manifesto
IF 4.7 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-06-28 DOI: 10.1007/s00146-025-02405-8
Gunter Bombaerts, Tom Hannes, Martin Adam, Alessandra Aloisi, Joel Anderson, P. Sven Arvidson, Lawrence Berger, Stefano Davide Bettera, Enrico Campo, Laura Candiotto, Silvia Caprioglio Panizza, Anna Ciaunica, Yves Citton, Diego D´Angelo, Matthew J. Dennis, Natalie Depraz, Peter Doran, Wolfgang Drechsler, William Edelglass, Iris Eisenberger, Mark Fortney, Beverley Foulks McGuire, Antony Fredriksson, Peter D. Hershock, Soraj Hongladarom, Wijnand IJsselsteijn, Beth Jacobs, Gabor Karsai, Steven Laureys, Thomas Taro Lennerfors, Jeanne Lim, Chien-Te Lin, William Lamson, Mark Losoncz, David Loy, Lavinia Marin, Bence Peter Marosan, Chiara Mascarello, David L. McMahan, Jin Y. Park, Nina Petek, Anna Puzio, Katrien Schaubroeck, Shobhit Shakya, Juewei Shi, Elizaveta Solomonova, Francesco Tormen, Jitendra Uttam, Marieke van Vugt, Sebastjan Vörös, Maren Wehrle, Galit Wellner, Jason M. Wirth, Olaf Witkowski, Apiradee Wongkitrungrueng, Dale S. Wright, Hin Sing Yuen, Yutong Zheng

We endorse policymakers’ efforts to address the negative consequences of the attention economy’s technology but add that these approaches are often limited in their criticism of the systemic context of human attention. Starting from Buddhist philosophy, we advocate a broader approach: an ‘ecology of attending’ that centers on conceptualizing, designing, and using attention (1) in an embedded way and (2) focused on the alleviating of suffering. With ‘embedded’ we mean that attention is not a neutral, isolated mechanism but a meaning-engendering part of an ‘ecology’ of bodily, sociotechnical and moral frameworks. With ‘focused on the alleviation of suffering’ we mean that we explicitly move away from the (often implicit) conception of attention as a tool for gratifying desires. We analyze existing inquiries in these directions and urge them to be intensified and integrated. As to the design and function of our technological environment, we propose three questions for further research: How can technology help to acknowledge us as ‘ecological’ beings, rather than as self-sufficient individuals? How can technology help to raise awareness of our moral framework? And how can technology increase the conditions for ‘attending’ to the alleviation of suffering, by substituting our covert self-driven moral framework with an ecologically attending one? We believe in the urgency of transforming the inhumane attention economy sociotechnical system into a humane ecology of attending, and in our ability to contribute to it.

{"title":"Beyond the attention economy, towards an ecology of attending. A manifesto","authors":"Gunter Bombaerts,&nbsp;Tom Hannes,&nbsp;Martin Adam,&nbsp;Alessandra Aloisi,&nbsp;Joel Anderson,&nbsp;P. Sven Arvidson,&nbsp;Lawrence Berger,&nbsp;Stefano Davide Bettera,&nbsp;Enrico Campo,&nbsp;Laura Candiotto,&nbsp;Silvia Caprioglio Panizza,&nbsp;Anna Ciaunica,&nbsp;Yves Citton,&nbsp;Diego D´Angelo,&nbsp;Matthew J. Dennis,&nbsp;Natalie Depraz,&nbsp;Peter Doran,&nbsp;Wolfgang Drechsler,&nbsp;William Edelglass,&nbsp;Iris Eisenberger,&nbsp;Mark Fortney,&nbsp;Beverley Foulks McGuire,&nbsp;Antony Fredriksson,&nbsp;Peter D. Hershock,&nbsp;Soraj Hongladarom,&nbsp;Wijnand IJsselsteijn,&nbsp;Beth Jacobs,&nbsp;Gabor Karsai,&nbsp;Steven Laureys,&nbsp;Thomas Taro Lennerfors,&nbsp;Jeanne Lim,&nbsp;Chien-Te Lin,&nbsp;William Lamson,&nbsp;Mark Losoncz,&nbsp;David Loy,&nbsp;Lavinia Marin,&nbsp;Bence Peter Marosan,&nbsp;Chiara Mascarello,&nbsp;David L. McMahan,&nbsp;Jin Y. Park,&nbsp;Nina Petek,&nbsp;Anna Puzio,&nbsp;Katrien Schaubroeck,&nbsp;Shobhit Shakya,&nbsp;Juewei Shi,&nbsp;Elizaveta Solomonova,&nbsp;Francesco Tormen,&nbsp;Jitendra Uttam,&nbsp;Marieke van Vugt,&nbsp;Sebastjan Vörös,&nbsp;Maren Wehrle,&nbsp;Galit Wellner,&nbsp;Jason M. Wirth,&nbsp;Olaf Witkowski,&nbsp;Apiradee Wongkitrungrueng,&nbsp;Dale S. Wright,&nbsp;Hin Sing Yuen,&nbsp;Yutong Zheng","doi":"10.1007/s00146-025-02405-8","DOIUrl":"10.1007/s00146-025-02405-8","url":null,"abstract":"<div><p>We endorse policymakers’ efforts to address the negative consequences of the attention economy’s technology but add that these approaches are often limited in their criticism of the systemic context of human attention. Starting from Buddhist philosophy, we advocate a broader approach: an ‘ecology of attending’ that centers on conceptualizing, designing, and using attention (1) in an embedded way and (2) focused on the alleviating of suffering. With ‘embedded’ we mean that attention is not a neutral, isolated mechanism but a meaning-engendering part of an ‘ecology’ of bodily, sociotechnical and moral frameworks. With ‘focused on the alleviation of suffering’ we mean that we explicitly move away from the (often implicit) conception of attention as a tool for gratifying desires. We analyze existing inquiries in these directions and urge them to be intensified and integrated. As to the design and function of our technological environment, we propose three questions for further research: How can technology help to acknowledge us as ‘ecological’ beings, rather than as self-sufficient individuals? How can technology help to raise awareness of our moral framework? And how can technology increase the conditions for ‘attending’ to the alleviation of suffering, by substituting our covert self-driven moral framework with an ecologically attending one? 
We believe in the urgency of transforming the inhumane attention economy sociotechnical system into a humane ecology of attending, and in our ability to contribute to it.</p></div>","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"41 1","pages":"477 - 492"},"PeriodicalIF":4.7,"publicationDate":"2025-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00146-025-02405-8.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146098993","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0