Exploring automation bias in human–AI collaboration: a review and implications for explainable AI
Pub Date: 2025-07-03 | DOI: 10.1007/s00146-025-02422-7 | AI & Society 41(1): 259–278
Giuseppe Romeo, Daniela Conti
As Artificial Intelligence (AI) becomes increasingly embedded in high-stakes domains such as healthcare, law, and public administration, automation bias (AB)—the tendency to over-rely on automated recommendations—has emerged as a critical challenge in human–AI collaboration. While previous reviews have examined AB in traditional computer-assisted decision-making, research on its implications in modern AI-driven work environments remains limited. To address this gap, this research systematically investigates how AB manifests in these settings and the cognitive mechanisms that influence it. Following PRISMA 2020 guidelines, we reviewed 35 peer-reviewed studies from SCOPUS, ScienceDirect, PubMed, and Google Scholar. The included literature, published between January 2015 and April 2025, spans fields such as cognitive psychology, human factors engineering, human–computer interaction, and neuroscience, providing an interdisciplinary foundation for our analysis. Traditional perspectives attribute AB to over-trust in automation or attentional constraints, leading users to perceive AI-generated outputs as reliable. However, our review presents a more nuanced view. While confirming some prior findings, it also sheds light on additional interacting factors such as AI literacy, level of professional expertise, cognitive profile, developmental trust dynamics, task verification demands, and explanation complexity. Notably, although Explainable AI (XAI) and transparency mechanisms are designed to mitigate AB, overly technical, cognitively demanding, or even simplistic explanations may inadvertently reinforce misplaced trust, especially among less experienced professionals with low AI literacy. Taken together, these findings suggest that although explanations may increase perceived system acceptability, they are often insufficient to improve decision accuracy or mitigate AB. Instead, user engagement emerges as the most feasible and impactful point of intervention. As increased verification effort has been shown to reduce complacency toward AI mis-recommendations, we propose explanation design strategies that actively promote critical engagement and independent verification. These conclusions offer both theoretical and practical contributions to bias-aware AI development, underscoring that explanation usability is best supported by features such as understandability and adaptiveness.
{"title":"Exploring automation bias in human–AI collaboration: a review and implications for explainable AI","authors":"Giuseppe Romeo, Daniela Conti","doi":"10.1007/s00146-025-02422-7","DOIUrl":"10.1007/s00146-025-02422-7","url":null,"abstract":"<div><p>As Artificial Intelligence (AI) becomes increasingly embedded in high-stakes domains such as healthcare, law, and public administration, automation bias (AB)—the tendency to over-rely on automated recommendations—has emerged as a critical challenge in human–AI collaboration. While previous reviews have examined AB in traditional computer-assisted decision-making, research on its implications in modern AI-driven work environments remains limited. To address this gap, this research systematically investigates how AB manifests in these settings and the cognitive mechanisms that influence it. Following PRISMA 2020 guidelines, we reviewed 35 peer-reviewed studies from SCOPUS, ScienceDirect, PubMed, and Google Scholar. The included literature, published between January 2015 and April 2025, spans fields such as cognitive psychology, human factors engineering, human–computer interaction, and neuroscience, providing an interdisciplinary foundation for our analysis. Traditional perspectives attribute AB to over-trust in automation or attentional constraints, resulting in users perceiving AI-generated outputs as reliable. However, our review presents a more nuanced view. While confirming some prior findings, it also sheds light on additional interacting factors such as, AI literacy, level of professional expertise, cognitive profile, developmental trust dynamics, task verification demands, and explanation complexity. Notably, although Explainable AI (XAI) and transparency mechanisms are designed to mitigate AB, overly technical, cognitively demanding, or even simplistic explanations may inadvertently reinforce misplaced trust, especially among less experienced professionals with low AI literacy. Taken together, these findings suggest that although explanations may increase perceived system acceptability, they are often insufficient to improve decision accuracy or mitigate AB. Instead, user engagement emerges as the most feasible and impactful point of intervention. As increased verification effort has been shown to reduce complacency toward AI mis-recommendations, we propose explanation design strategies that actively promote critical engagement and independent verification. These conclusions offer both theoretical and practical contributions to bias-aware AI development, underscoring that explanation usability is best supported by features such as understandability and adaptiveness.</p></div>","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"41 1","pages":"259 - 278"},"PeriodicalIF":4.7,"publicationDate":"2025-07-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00146-025-02422-7.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146099003","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Why the confusion matrix fails as a model of knowledge
Pub Date: 2025-07-02 | DOI: 10.1007/s00146-025-02456-x | AI & Society 41(1): 471–472
Ian van der Linde
{"title":"Why the confusion matrix fails as a model of knowledge","authors":"Ian van der Linde","doi":"10.1007/s00146-025-02456-x","DOIUrl":"10.1007/s00146-025-02456-x","url":null,"abstract":"","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"41 1","pages":"471 - 472"},"PeriodicalIF":4.7,"publicationDate":"2025-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146099154","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The phenomenon of deep nudes—a new threat to children and adults
Pub Date: 2025-07-01 | DOI: 10.1007/s00146-025-02425-4 | AI & Society 41(1): 545–556
Kamil Kopecký, Dominik Voráč
The article explores the misuse of artificial intelligence (AI) to generate pornographic images, including child pornography, through so-called deep nudes—applications that create realistic nude images from photographs without the consent of the individuals depicted. This phenomenon has serious psychological and social impacts on victims, especially children, who can become targets of cyberbullying, blackmail, and other forms of abuse. The paper presents the results of a 2024 survey on the use of artificial intelligence among Czech primary and secondary school students, which involved over 27,336 respondents. Of these pupils, 2.77% reported having created deep nude photos with the help of AI. Deep nudes are more likely to be created by boys, who are 3.56 times more likely than girls to generate such a photo. Differences based on age and type of school also exist, but they are negligible.
{"title":"The phenomenon of deep nudes—a new threat to children and adults","authors":"Kamil Kopecký, Dominik Voráč","doi":"10.1007/s00146-025-02425-4","DOIUrl":"10.1007/s00146-025-02425-4","url":null,"abstract":"<div><p>The article explores the misuse of artificial intelligence (AI) to generate pornographic images, including child pornography, through so-called deep nudes—applications that create realistic nude images from photographs without the consent of individuals. This phenomenon has serious psychological and social impacts on victims, especially children, who can become targets of cyberbullying, blackmail and other forms of abuse. The paper presents the results of a survey on the use of artificial intelligence among Czech primary and secondary school students (2024), which involved over 27,336 respondents. Deep nude photos with the help of AI were created by 2.77% of Czech primary and secondary school pupils. Deep nude is more likely to be generated by boys, who are 3.56 times more likely to generate a deep nude photo than girls. There are also differences based on age and type of school, but these differences are negligible.</p></div>","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"41 1","pages":"545 - 556"},"PeriodicalIF":4.7,"publicationDate":"2025-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00146-025-02425-4.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146099118","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Artificial intelligence through the eyes of Hannah Arendt: fear, alienation, and empowerment
Pub Date: 2025-06-30 | DOI: 10.1007/s00146-025-02435-2 | AI & Society 41(1): 455–462
Colin Ashruf
Hannah Arendt is known—among her many other contributions to political theory, ethics, and reflections on the human condition—for her analysis of the origins of pre-WWII totalitarianism, but her insights into the history of science and technology, particularly their impact on society and politics, also prove valuable in putting recent developments in artificial intelligence and social media into perspective. In this paper, I extrapolate Arendt’s framework to examine the potential threat artificial intelligence poses to humanity, drawing parallels between contemporary technological advances and those of Arendt’s era, such as nuclear weapons and space exploration. I argue that the fear of artificial intelligence ultimately reflects a deeper fear of humanity itself. I then explore Arendt’s analysis of how the history of science and technology has brought us to a point where our R&D efforts no longer seem to be focused on physical products but rather on intricate processes—in the case of artificial intelligence, self-learning algorithms that rely on human input for proper functioning. The scientific method, which spurred the recent scientific revolution and, as a side effect, unleashed an impressive range of technological breakthroughs on society at an ever-accelerating pace, has, through increased consumerism and job automation, added to world-alienation and self-alienation, culminating, in turn, in a society of increasingly isolated individuals who are vulnerable to populism and authoritarianism. In line with Arendt, I contend, however, that negative outcomes are not inherent to scientific and technological advancement. While social media and artificial intelligence can be used for surveillance, control, and the spreading of misinformation and hate, as we sometimes see today, they can equally be used to counter world- and self-alienation. These technologies hold the potential, for instance, to enhance education in the humanities, uphold the boundaries between science and technology and politics, and make democratic processes swifter, more direct, and more transparent, thereby reinforcing participatory democracy and fostering a more engaged and connected society.
{"title":"Artificial intelligence through the eyes of Hannah Arendt: fear, alienation, and empowerment","authors":"Colin Ashruf","doi":"10.1007/s00146-025-02435-2","DOIUrl":"10.1007/s00146-025-02435-2","url":null,"abstract":"<div><p>Hannah Arendt is known—among the many other contributions to political theory, ethics, and reflections on the human condition—for her analysis on the origins of pre-WWII totalitarianism, but her insights into the history of science and technology, particularly their impact on society and politics, also prove valuable to help put recent developments in artificial intelligence and social media into perspective. In this paper, I extrapolate Arendt’s framework to examine the potential threat artificial intelligence poses to humanity, drawing parallels between contemporary technological advances and those of Arendt’s era, such as nuclear weapons and space exploration. I argue that the fear of artificial intelligence ultimately reflects a deeper fear of humanity itself. I then explore Arendt’s analysis of how the history of science and technology has brought us to a point where our R&D efforts no longer seem to be focused on physical products but rather on intricate processes—in the case of artificial intelligence self-learning algorithms that rely on human input for proper functioning. The scientific method, which spurred the recent scientific revolution and, as a side effect, unleashed an impressive range of technological breakthroughs on society at an ever-accelerating pace, has, through increased consumerism and job automation, added to world-alienation and self-alienation, culminating, in turn, in a society of increasingly isolated individuals that are vulnerable to populism and authoritarianism. In line with Arendt, I contend, however, that negative outcomes are not inherent to scientific and technological advancement. While social media and artificial intelligence can be used for surveillance, control, and the spreading of misinformation and hate, as we sometimes see today, they can equally be used to counter world- and self-alienation. These technologies hold the potential, for instance, to enhance education in the humanities, uphold the boundaries between science and technology and politics, and make democratic processes swifter, more direct, and more transparent, thereby reinforcing participatory democracy and fostering a more engaged and connected society.</p></div>","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"41 1","pages":"455 - 462"},"PeriodicalIF":4.7,"publicationDate":"2025-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146098996","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Deciphering authenticity in the age of AI: how AI-generated disinformation images and AI detection tools influence judgements of authenticity
Pub Date: 2025-06-29 | DOI: 10.1007/s00146-025-02416-5 | AI & Society 41(1): 493–504
Aqsa Farooq, Claes de Vreese
An ongoing surge of Artificial Intelligence (AI)-enabled false content, including AI-generated images used in political disinformation campaigns, has been spreading through the information ecosystem. Thus, there remains a pressing need to understand which factors individuals rely upon when determining whether images are AI-generated, particularly when such images can be used to spread disinformation. AI-generated images have been characterised by their aesthetic realism, which can be leveraged to deceive users, and those who use generative AI to create deceptive content also tend to exploit its ability to convey and elicit emotion. This experimental study explored how aesthetic realism and emotional salience, as key features of both AI-generated content and disinformation, may influence authenticity judgements of AI-generated disinformation images. In this study, 292 UK-based participants were presented with both AI-generated and non-AI-generated disinformation images which varied in aesthetic realism and emotional salience. Results showed that participants were more likely to judge realistic-looking AI-generated images as authentic compared with less realistic-looking AI-generated images, but did so with less confidence in their decisions. Emotional salience was not a significant predictor of judgements. When participants were presented with the correct verdict of an AI detection tool, their reliance on the tool to update their own judgements was predicted by the aesthetic realism of the image and their confidence levels. These findings may assist with the development of disinformation detection tools, as well as strategies that mitigate the spread of deceptive, synthesised visual content in the digital age.
{"title":"Deciphering authenticity in the age of AI: how AI-generated disinformation images and AI detection tools influence judgements of authenticity","authors":"Aqsa Farooq, Claes de Vreese","doi":"10.1007/s00146-025-02416-5","DOIUrl":"10.1007/s00146-025-02416-5","url":null,"abstract":"<div><p>An ongoing surge of Artificial Intelligence (AI)-enabled false content has been spreading its way through the information ecosystem, including AI-generated images, which have been used as part of political disinformation campaigns. Thus, there remains a pressing need to understand which factors individuals rely upon when determining whether images are AI-generated, particularly when they can be used to spread disinformation. AI-generated images have been characterised by their aesthetic realism, which can be leveraged to deceive users, and those who use generative AI to create deceptive content also tend to exploit its ability to convey and elicit emotion. This experimental study explored how aesthetic realism and emotional salience, as key features of both AI-generated content and disinformation, may influence authenticity judgements of AI-generated disinformation images. In this study, 292 UK-based participants were presented with both AI-generated and non-AI-generated disinformation images which varied in aesthetic realism and emotional salience. Results showed that participants were more likely to judge realistic-looking AI-generated images as being authentic compared with less realistic-looking AI-generated images, but did so with less confidence in their decision. Emotional salience was not a significant predictor of judgements. When participants were presented with the correct verdict of an AI detection tool, their reliance on the tool to update their own judgements was predicted by the aesthetic realism of the image and their confidence levels. These findings may assist with the development of disinformation detection tools, as well as strategies that mitigate the spread of deceptive, synthesised visual content in the digital age.</p></div>","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"41 1","pages":"493 - 504"},"PeriodicalIF":4.7,"publicationDate":"2025-06-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00146-025-02416-5.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146099006","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ethical aspects of AI use in the circular economy
Pub Date: 2025-06-29 | DOI: 10.1007/s00146-025-02436-1 | AI & Society 41(1): 575–593
Iryna Bashynska
Artificial intelligence (AI) is increasingly applied to enable circular economy (CE) models by optimizing resource use, product design, waste management, and recycling. However, alongside potential environmental and economic benefits, the deployment of AI in circular systems raises significant ethical concerns that can influence real-world adoption of CE principles. This review critically examines key ethical issues at the intersection of AI and the CE, drawing on recent literature, case studies, and policy frameworks. We identify and discuss themes including algorithmic transparency and explainability, data privacy and bias, impacts on labor and employment, social inclusion and fairness, responsible AI deployment, and the role of human-in-the-loop oversight. We synthesize insights from academic studies, industry examples, and governance initiatives (e.g. the EU AI Act and OECD AI Principles) to illuminate how these ethical challenges affect the implementation of circular economy practices. Our analysis finds that issues like opaque algorithms, biased data, workforce displacement, and unequal access can undermine trust and equity in AI-driven circular solutions, thereby impeding their societal acceptance. Conversely, emerging principles of responsible AI—emphasizing transparency, accountability, fairness, and human oversight—offer pathways to mitigate risks and foster more inclusive, trustworthy circular economy transitions. The review concludes with recommendations for policymakers, organizations, and practitioners on aligning AI ethics with circular economy goals, highlighting the need for interdisciplinary collaboration to ensure that AI contributes to a sustainable and just circular future.
{"title":"Ethical aspects of AI use in the circular economy","authors":"Iryna Bashynska","doi":"10.1007/s00146-025-02436-1","DOIUrl":"10.1007/s00146-025-02436-1","url":null,"abstract":"<div><p>Artificial intelligence (AI) is increasingly applied to enable circular economy (CE) models by optimizing resource use, product design, waste management, and recycling. However, alongside potential environmental and economic benefits, the deployment of AI in circular systems raises significant ethical concerns that can influence real-world adoption of CE principles. This review critically examines key ethical issues at the intersection of AI and the CE, drawing on recent literature, case studies, and policy frameworks. We identify and discuss themes including algorithmic transparency and explainability, data privacy and bias, impacts on labor and employment, social inclusion and fairness, responsible AI deployment, and the role of human-in-the-loop oversight. We synthesize insights from academic studies, industry examples, and governance initiatives (e.g. the EU AI Act and OECD AI Principles) to illuminate how these ethical challenges affect the implementation of circular economy practices. Our analysis finds that issues like opaque algorithms, biased data, workforce displacement, and unequal access can undermine trust and equity in AI-driven circular solutions, thereby impeding their societal acceptance. Conversely, emerging principles of responsible AI—emphasizing transparency, accountability, fairness, and human oversight—offer pathways to mitigate risks and foster more inclusive, trustworthy circular economy transitions. The review concludes with recommendations for policymakers, organizations, and practitioners on aligning AI ethics with circular economy goals, highlighting the need for interdisciplinary collaboration to ensure that AI contributes to a sustainable and just circular future.</p></div>","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"41 1","pages":"575 - 593"},"PeriodicalIF":4.7,"publicationDate":"2025-06-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00146-025-02436-1.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146099206","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Attention is all they need: cognitive science and the (techno)political economy of attention in humans and machines
Pub Date: 2025-06-28 | DOI: 10.1007/s00146-025-02400-z | AI & Society 41(1): 5–21
Pablo González de la Torre, Marta Pérez-Verdugo, Xabier E. Barandiaran
This paper critically analyses the “attention economy” within the framework of cognitive science and techno-political economics, as applied to both human and machine interactions. We explore how current business models, particularly in digital platform capitalism, harness user engagement by strategically shaping attentional patterns. These platforms utilize advanced AI and massive data analytics to enhance user engagement, creating a cycle of attention capture and data extraction. We review contemporary (neuro)cognitive theories of attention and platform engagement design techniques, and we criticize classical cognitivist and behaviourist theories for their inadequacies in addressing the potential harms of such engagement to user autonomy and wellbeing. In contrast, 4E approaches to cognitive science, which emphasize the embodied, extended, enactive, and ecological aspects of cognition, offer us an intrinsic normative standpoint and a more integrated understanding of how attentional patterns are actively constituted by adaptive digital environments. By examining the precarious nature of habit formation in digital contexts, we reveal the techno-economic underpinnings that threaten personal autonomy by disaggregating habits away from the individual and into an AI-managed collection of behavioural patterns. Our current predicament suggests the necessity of a paradigm shift towards an ecology of attention. This shift aims to foster environments that respect and preserve human cognitive and social capacities, countering the exploitative tendencies of cognitive capitalism.
{"title":"Attention is all they need: cognitive science and the (techno)political economy of attention in humans and machines","authors":"Pablo González de la Torre, Marta Pérez-Verdugo, Xabier E. Barandiaran","doi":"10.1007/s00146-025-02400-z","DOIUrl":"10.1007/s00146-025-02400-z","url":null,"abstract":"<div><p>This paper critically analyses the “attention economy” within the framework of cognitive science and techno-political economics, as applied to both human and machine interactions. We explore how current business models, particularly in digital platform capitalism, harness user engagement by strategically shaping attentional patterns. These platforms utilize advanced AI and massive data analytics to enhance user engagement, creating a cycle of attention capture and data extraction. We review contemporary (neuro)cognitive theories of attention and platform engagement design techniques and criticize classical cognitivist and behaviourist theories for their inadequacies in addressing the potential harms of such engagement on user autonomy and wellbeing. 4E approaches to cognitive science, instead, emphasizing the embodied, extended, enactive, and ecological aspects of cognition, offer us an intrinsic normative standpoint and a more integrated understanding of how attentional patterns are actively constituted by adaptive digital environments. By examining the precarious nature of habit formation in digital contexts, we reveal the techno-economic underpinnings that threaten personal autonomy by disaggregating habits away from the individual, into an AI managed collection of behavioural patterns. Our current predicament suggests the necessity of a paradigm shift towards an ecology of attention. This shift aims to foster environments that respect and preserve human cognitive and social capacities, countering the exploitative tendencies of cognitive capitalism.</p></div>","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"41 1","pages":"5 - 21"},"PeriodicalIF":4.7,"publicationDate":"2025-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00146-025-02400-z.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146099207","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The epistemological consequences of large language models: rethinking collective intelligence and institutional knowledge
Pub Date: 2025-06-28 | DOI: 10.1007/s00146-025-02426-3 | AI & Society 41(1): 79–97
Angjelin Hila
In this paper, we interrogate the epistemological implications of human–LLM interaction with a specific focus on epistemological threats. We develop a theory of epistemic justification, termed collective epistemology, that synthesizes internalist and externalist conceptions of epistemic warrant. Collective epistemology considers the way epistemological warrant is distributed across human collectives. In pursuing this line of thinking, we take bounded rationality and dual-process theory as background assumptions in our analysis of collective epistemology as a mechanism of collective rationality. Following this approach, we distinguish between internalist justification as a robust standard of rationality and externalist justification as a reliable knowledge transmission mechanism. We argue that while these standards jointly constitute necessary and sufficient conditions for collective rationality, only internalist justification produces knowledge. We posit that reflective knowledge entails three necessary and sufficient conditions: a) rational agents reflectively understand the basis on which a proposition is evaluated as true; b) in the absence of a reflective evaluative basis for a proposition, rational agents consistently evaluate the reliability of truth sources; and c) rational agents have an epistemic duty to apply a) and b) as rational standards in their domains of competence. Since distributed rationality is socially scaffolded, we pursue the consequences of unchecked human–LLM interaction for social epistemic chains of dependence. We argue that LLMs approximate a type of externalist justification termed reliabilism but do not instantiate internalist standards of justification. Specifically, we argue that LLMs do not possess reflective justification for the information they produce but rather reliably transmit information whose reflective basis has been established in advance. Since LLMs cannot produce knowledge with reflective justifiedness but only reliabilist justifiedness, we argue that human outsourcing of reflective knowledge to reliable LLM information threatens to erode reflective standards of justification at scale. As a result, LLM information reliability disincentivizes comprehension and understanding in human agents. Human agents who forfeit comprehension and understanding for reliably correct results reduce the net justifiedness of their own beliefs and, consequently, reduce their ability to perform their epistemic duties professionally and civically. The scaled outsourcing of reflective knowledge to LLMs across collectives threatens to impoverish the production of reflective knowledge. To mitigate these potential threats, we propose developing epistemic norms across three tiers of social organization: a) a normative epistemic model for individual human–LLM interaction, b) norm setting through institutional and organizational frameworks, and c) the imposition of deontic constraints at organizational and/or legislative levels.
{"title":"The epistemological consequences of large language models: rethinking collective intelligence and institutional knowledge","authors":"Angjelin Hila","doi":"10.1007/s00146-025-02426-3","DOIUrl":"10.1007/s00146-025-02426-3","url":null,"abstract":"<div><p>In this paper, we interrogate the epistemological implications of human–LLM interaction with a specific focus on epistemological threats. We develop a theory of epistemic justification that synthesizes internalist and externalist conceptions of epistemic warrant termed collective epistemology. Collective epistemology considers the way epistemological warrant is distributed across human collectives. In pursuing this line of thinking, we take bounded rationality and dual-process theory as background assumptions in our analysis of collective epistemology as a mechanism of collective rationality. Following this approach, we distinguish between internalist justification as a robust standard of rationality and externalist justification as a reliable knowledge transmission mechanism. We argue that while these standards jointly constitute necessary and sufficient conditions for collective rationality, only internalist justification produces knowledge. We posit that reflective knowledge entails three necessary and sufficient conditions: a) rational agents reflectively understand the basis on which a proposition is evaluated as true b) in absence of a reflective evaluative basis for a proposition, rational agents consistently evaluate the reliability of truth sources, and c) rational agents have an epistemic duty to apply a) and b) as rational standards in their domains of competence. Since distributed rationality is socially scaffolded, we pursue the consequences of unchecked human–LLM interaction on social epistemic chains of dependence. We argue that LLMs approximate a type of externalist justification termed reliabilism but do not instantiate internalist standards of justification. Specifically, we argue that LLMs do not possess reflective justification for the information they produce but rather reliably transmit information whose reflective basis has been established in advance. Since LLMs cannot produce knowledge with reflective justifiedness but only reliabilist justifiedness, we argue that human outsourcing of reflective knowledge to reliable LLM information threatens to erode reflective standards of justification at scale. As a result, LLM information reliability disincentivizes comprehension and understanding in human agents. Human agents that forfeit comprehension and understanding for reliably correct results reduce the net justifiedness of their own beliefs and, consequently, reduce their ability to perform their epistemic duties professionally and civically. The scaled outsourcing of reflective knowledge to LLMs across collectives threatens the impoverishment of the production of reflective knowledge. 
To mitigate these potential threats, we propose developing epistemic norms across three tiers of social organization: a) normative epistemic model for individual human–LLM interaction, b) norm setting through institutional and organizational frameworks and c) the imposition of deontic constraints at organizational and/or legislative lev","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"41 1","pages":"79 - 97"},"PeriodicalIF":4.7,"publicationDate":"2025-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146099198","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Beyond the attention economy, towards an ecology of attending. A manifesto
Pub Date: 2025-06-28 | DOI: 10.1007/s00146-025-02405-8 | AI & Society 41(1): 477–492
Gunter Bombaerts, Tom Hannes, Martin Adam, Alessandra Aloisi, Joel Anderson, P. Sven Arvidson, Lawrence Berger, Stefano Davide Bettera, Enrico Campo, Laura Candiotto, Silvia Caprioglio Panizza, Anna Ciaunica, Yves Citton, Diego D´Angelo, Matthew J. Dennis, Natalie Depraz, Peter Doran, Wolfgang Drechsler, William Edelglass, Iris Eisenberger, Mark Fortney, Beverley Foulks McGuire, Antony Fredriksson, Peter D. Hershock, Soraj Hongladarom, Wijnand IJsselsteijn, Beth Jacobs, Gabor Karsai, Steven Laureys, Thomas Taro Lennerfors, Jeanne Lim, Chien-Te Lin, William Lamson, Mark Losoncz, David Loy, Lavinia Marin, Bence Peter Marosan, Chiara Mascarello, David L. McMahan, Jin Y. Park, Nina Petek, Anna Puzio, Katrien Schaubroeck, Shobhit Shakya, Juewei Shi, Elizaveta Solomonova, Francesco Tormen, Jitendra Uttam, Marieke van Vugt, Sebastjan Vörös, Maren Wehrle, Galit Wellner, Jason M. Wirth, Olaf Witkowski, Apiradee Wongkitrungrueng, Dale S. Wright, Hin Sing Yuen, Yutong Zheng
We endorse policymakers’ efforts to address the negative consequences of the attention economy’s technology but add that these approaches are often limited in their criticism of the systemic context of human attention. Starting from Buddhist philosophy, we advocate a broader approach: an ‘ecology of attending’ that centers on conceptualizing, designing, and using attention (1) in an embedded way and (2) focused on the alleviation of suffering. With ‘embedded’ we mean that attention is not a neutral, isolated mechanism but a meaning-engendering part of an ‘ecology’ of bodily, sociotechnical and moral frameworks. With ‘focused on the alleviation of suffering’ we mean that we explicitly move away from the (often implicit) conception of attention as a tool for gratifying desires. We analyze existing inquiries in these directions and urge them to be intensified and integrated. As to the design and function of our technological environment, we propose three questions for further research: How can technology help to acknowledge us as ‘ecological’ beings, rather than as self-sufficient individuals? How can technology help to raise awareness of our moral framework? And how can technology increase the conditions for ‘attending’ to the alleviation of suffering, by substituting our covert self-driven moral framework with an ecologically attending one? We believe in the urgency of transforming the inhumane attention economy sociotechnical system into a humane ecology of attending, and in our ability to contribute to it.
{"title":"Beyond the attention economy, towards an ecology of attending. A manifesto","authors":"Gunter Bombaerts, Tom Hannes, Martin Adam, Alessandra Aloisi, Joel Anderson, P. Sven Arvidson, Lawrence Berger, Stefano Davide Bettera, Enrico Campo, Laura Candiotto, Silvia Caprioglio Panizza, Anna Ciaunica, Yves Citton, Diego D´Angelo, Matthew J. Dennis, Natalie Depraz, Peter Doran, Wolfgang Drechsler, William Edelglass, Iris Eisenberger, Mark Fortney, Beverley Foulks McGuire, Antony Fredriksson, Peter D. Hershock, Soraj Hongladarom, Wijnand IJsselsteijn, Beth Jacobs, Gabor Karsai, Steven Laureys, Thomas Taro Lennerfors, Jeanne Lim, Chien-Te Lin, William Lamson, Mark Losoncz, David Loy, Lavinia Marin, Bence Peter Marosan, Chiara Mascarello, David L. McMahan, Jin Y. Park, Nina Petek, Anna Puzio, Katrien Schaubroeck, Shobhit Shakya, Juewei Shi, Elizaveta Solomonova, Francesco Tormen, Jitendra Uttam, Marieke van Vugt, Sebastjan Vörös, Maren Wehrle, Galit Wellner, Jason M. Wirth, Olaf Witkowski, Apiradee Wongkitrungrueng, Dale S. Wright, Hin Sing Yuen, Yutong Zheng","doi":"10.1007/s00146-025-02405-8","DOIUrl":"10.1007/s00146-025-02405-8","url":null,"abstract":"<div><p>We endorse policymakers’ efforts to address the negative consequences of the attention economy’s technology but add that these approaches are often limited in their criticism of the systemic context of human attention. Starting from Buddhist philosophy, we advocate a broader approach: an ‘ecology of attending’ that centers on conceptualizing, designing, and using attention (1) in an embedded way and (2) focused on the alleviating of suffering. With ‘embedded’ we mean that attention is not a neutral, isolated mechanism but a meaning-engendering part of an ‘ecology’ of bodily, sociotechnical and moral frameworks. With ‘focused on the alleviation of suffering’ we mean that we explicitly move away from the (often implicit) conception of attention as a tool for gratifying desires. We analyze existing inquiries in these directions and urge them to be intensified and integrated. As to the design and function of our technological environment, we propose three questions for further research: How can technology help to acknowledge us as ‘ecological’ beings, rather than as self-sufficient individuals? How can technology help to raise awareness of our moral framework? And how can technology increase the conditions for ‘attending’ to the alleviation of suffering, by substituting our covert self-driven moral framework with an ecologically attending one? We believe in the urgency of transforming the inhumane attention economy sociotechnical system into a humane ecology of attending, and in our ability to contribute to it.</p></div>","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"41 1","pages":"477 - 492"},"PeriodicalIF":4.7,"publicationDate":"2025-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00146-025-02405-8.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146098993","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}