Pub Date: 2025-06-09 | DOI: 10.1007/s00146-025-02409-4
Boundary-making practices: LLMs and an artifactual production of objectivity
Mihye An
AI & Society 40(8): 5967–5979

This theoretical article moves beyond representationalist conceptions of objectivity to examine deeper challenges posed by LLMs in collective knowledge production. While LLMs are often criticized for bias, hallucination, and generating “bullshit” that misrepresents reality, such critiques are too narrow to account for how LLMs transform the sociotechnical practices of knowledge-making. Drawing on Barad’s performative account, we argue that objectivity should be understood not as fixed representations of the world but as ongoing ethical and epistemological boundaries emerging through complex intra-acting agencies. We offer a relational analysis of LLM production, framing it as a series of transformations between technical artifacts: from Internet to dataset, dataset to base model, and base model to instruction-tuned model. Each transformation introduces exclusions that enact epistemological, computational, and discursive boundaries. We conclude by proposing “artifactual literacy,” a critical awareness of how LLMs function as contingent artifacts mediating the evolving boundaries of objective knowledge.
Pub Date: 2025-06-07 | DOI: 10.1007/s00146-025-02397-5
When autonomy breaks: the hidden existential risk of AI
Joshua Krook
AI & Society 40(8): 6011–6024

AI risks are typically framed around physical threats to humanity: a loss of control or an accidental error causing humanity's extinction. However, in line with the gradual disempowerment thesis, I argue that there is an underappreciated risk in the slow and irrevocable decline of human autonomy. As AI starts to outcompete humans in various areas of life, a tipping point will be reached where it no longer makes sense to rely on human decision-making, critical thinking, or even creativity. What may follow is a process of gradual de-skilling, where we lose skills that we currently take for granted. Traditionally, it is argued that AI will gain human skills over time and that these skills are innate and immutable in humans. By contrast, I argue that humans may lose such skills as critical thinking, decision-making, and even creativity in an AGI world. The biggest threat to humanity is, therefore, not that machines will become more like humans, but that humans will become more like machines.
Pub Date: 2025-06-06 | DOI: 10.1007/s00146-025-02407-6
AI, journalism, and critical AI literacy: exploring journalists’ perspectives on AI and responsible reporting
Tomasz Hollanek, Dorian Peters, Eleanor Drage, Raphael Hernandes
AI & Society 40(8): 6393–6405

This study explores the perspectives of media professionals on the concerns, needs, and responsibilities related to fostering AI literacy among journalists. We report on findings from two workshops with journalists (based in the USA, the UK, China, and India), as well as representatives of civil society organizations and academic specialists in media and AI literacy. Through a reflexive qualitative analysis of data collected during the workshops, we examine the obstacles to AI literacy development among journalists and the quality of resources currently available to them for learning about AI and AI ethics. We highlight the most pressing needs in AI-focused education for journalists and surface participants’ ideas for potential solutions, including an authoritative online compendium on AI and journalism and a database of diverse expert voices. We point to the areas where relevant stakeholders should direct their efforts to support journalists in navigating AI responsibly and critically.
Pub Date: 2025-06-04 | DOI: 10.1007/s00146-025-02396-6
Human centered systems start with social dynamics and arrive at ontology
Brenda O’Neill, Larry Stapleton, Peter Carew
AI & Society 40(8): 5981–5998

The purpose of this research is to support and nurture tacit knowledge whilst simultaneously leading to the development of machine-based intelligent systems which incorporate machine-readable knowledge for the benefit of society. This paper starts with an introduction to the persistent power struggle between humans and technology and shines a light on Professor Michael Cooley’s involvement with the Lucas Plan in the 1970s and his PhD work, which focused on the transition from manual draftsmanship to Computer Aided Design in engineering. A research lab is identified as a ‘complex adaptive system’ and forms the basis of a longitudinal case study on the human-centered, bottom-up approach to digitisation of cultural heritage. The components required to support and nurture the growth of a Participatory Action Research lab are identified. The novel ‘ENRICHER’ method embodies human centeredness and is operationalized, tested, and evaluated, and the findings are discussed. Examples of emergence are also discussed. A metric of the ENRICHER method initially identified where the lab did not fully meet all of the method’s eight points. Subsequent actions adjusted the holonic lens focus to metadata and the ongoing work on the creation of a cataloging tool for the librarians. The use of XML technologies integrates the work into a larger model of intelligence. It positions the work on the semantic web technology stack and opens up the pathway to ontology generation and to the development and management of large language models. The ENRICHER method is a way of developing human–machine symbiotics that also incorporate AI, e.g., transcription and metadata generation.
Pub Date: 2025-05-30 | DOI: 10.1007/s00146-025-02353-3
Spaces for democracy with generative artificial intelligence: public architecture at stake
Ingrid Campo-Ruiz
AI & Society 40(8): 5951–5966

Urban space is an important infrastructure for democracy and fosters democratic engagement, such as meetings, discussions, and protests. Artificial Intelligence (AI) systems could affect democracy through urban space, for example, by breaching data privacy, hindering political equality and engagement, or manipulating information about places. This research explores the urban places that promote democratic engagement according to the outputs generated with ChatGPT-4o. This research moves beyond the dominant framework of discussions on AI and democracy as a form of spreading misinformation and fake news. Instead, it provides an innovative framework, combining architectural space as an infrastructure for democracy with the way in which generative AI tools provide a nuanced view of democracy that could potentially influence millions of people. This article presents a new conceptual framework for understanding AI for democracy from the perspective of architecture. For the first case study, in Stockholm, Sweden, AI outputs were combined with GIS maps and a theoretical framework. The research then analyzes the results obtained for Madrid, Spain, and Brussels, Belgium. This analysis provides deeper insights into the outputs obtained with AI, the places that facilitate democratic engagement and those that are overlooked, and the ensuing consequences.

Results show that the urban space for democratic engagement obtained with ChatGPT-4o for Stockholm is mainly composed of governmental institutions and non-governmental organizations for representative or deliberative democracy and the education of individuals in public buildings in the city centre. The results obtained with ChatGPT-4o barely reflect public open spaces, parks, or routes. They also prioritize organized rather than spontaneous engagement and do not reflect unstructured events, such as demonstrations, or powerful actors, such as political parties or workers’ unions. The places listed by ChatGPT-4o for Madrid and Brussels give major prominence to private spaces, like offices, that house organizations with political activities. While cities offer a broad and complex array of places for democratic engagement, outputs obtained with AI can narrow users’ perspectives on their real opportunities, while perpetuating powerful agents by not making them sufficiently visible to be accountable for their actions. In conclusion, urban space is a fundamental infrastructure for democracy, and AI outputs could be a valid starting point for understanding the plethora of interactions. These outputs should be complemented with other forms of knowledge to produce a more comprehensive framework that adjusts to reality for developing AI in a democratic context. Urban space should be protected as a shared space and as an asset for societies to fully develop democracy in its multiple forms. Democracy and urban spaces influence each other and are subject to pressures from different actors, including AI. AI systems should …
Pub Date: 2025-05-26 | DOI: 10.1007/s00146-025-02390-y
The roles of cooperative attitude, personal innovativeness, and anxiety in AI adoption within the design community
Jo-Yu Kuo, Tzu-Hsuan Wang
AI & Society 40(8): 6339–6355

The integration of AI technology into design practices has sparked debate within the design community, particularly regarding its behavioral and process-oriented impacts. While existing studies predominantly rely on qualitative methods such as interviews and observations, these approaches may fall short in uncovering the intricate, cross-disciplinary relationships essential for a holistic understanding of AI’s societal implications. This study introduces an acceptance model tailored to designers, based on the Unified Theory of Acceptance and Use of Technology (UTAUT). The proposed model emphasizes the increasing role of online cooperation and affective drivers, including personal innovativeness and anxiety toward AI-integrated design tools. By analyzing 292 valid responses through structural equation modeling, we found that social influence and facilitating conditions are strongly correlated with positive attitudes toward cooperation, while performance expectancy emerged as the key driver for AI adoption in design. Notably, experienced professionals reported greater access to support and resources for AI integration. Although AI-induced anxiety affects certain aspects of technology adoption, it does not significantly diminish performance expectancy. In addition, the study discusses gender differences in technology acceptance and the influence of underlying geographic factors. These insights contribute to the broader discourse on the societal implications of AI, offering practical guidance for the development of AI-integrated design programs in educational and professional contexts.
Pub Date: 2025-05-25 | DOI: 10.1007/s00146-025-02398-4
Ethical and epistemic implications of artificial intelligence in medicine: a stakeholder-based assessment
Jonathan Adams
AI & Society 40(8): 5935–5950

As artificial intelligence (AI) technologies become increasingly embedded in high-stakes fields such as healthcare, ethical and epistemic considerations raise the need for evaluative frameworks to assess their societal impacts across multiple dimensions. This paper uses the ethical-epistemic matrix (EEM), a structured framework that integrates both ethical and epistemic principles, to evaluate medical AI applications more comprehensively. Building on the ethical principles of well-being, autonomy, justice, and explicability, the matrix introduces epistemic principles—accuracy, consistency, relevance, and instrumental efficacy—that assess AI’s role in knowledge production. This dual approach enables a nuanced assessment that reflects the diverse perspectives of stakeholders within the medical field—patients, clinicians, developers, the public, and health policy-makers—who assess AI systems differently based on distinct interests and epistemic goals. Although the EEM has been outlined conceptually before, no published research paper has yet used it to explore the ethical and epistemic implications arising in its key intended application domain of AI in medicine. Through a systematic demonstration of the EEM as applied to medical AI, this paper argues that it encourages a broader understanding of AI’s implications and serves as a valuable methodological tool for evaluating future uses. This is illustrated with the case study of AI systems in sleep apnea detection, where the EEM highlights the ethical trade-offs and epistemic challenges that different stakeholders may perceive, which can be made more concrete if the tool is embedded in future technical projects.
Pub Date: 2025-05-24 | DOI: 10.1007/s00146-025-02388-6
Excuses, excuses: moral agency and the professional identity of AI developers
Tricia Griffin, Brian P. Green, Jos V. M. Welie
AI & Society 40(8): 6327–6338

Artificial intelligence developers, machine learning engineers, and data scientists occupy a contradictory role in the modern marketplace. While they are central to the business and science of AI, they are marginalized as moral agents. Consequently, the marketplace has cultivated environments in which developers can be unthinking in their own roles and responsibilities, while at the same time tasking them with creating “thinking machines.” The central aim of this article is to show that this state of affairs is morally unjustifiable. To accomplish this, we draw from Arthur Isak Applbaum’s work on adversary roles and Alasdair MacIntyre’s framework for professional moral agency to establish the context dependencies for a “good” AI developer. We then draw from available studies that have engaged developers in questions about their moral agency and place them in conversation with Dennis Thompson and Helen Nissenbaum about the excuses associated with “the problem of many hands,” a concept that has beguiled accountability in the AI community for decades. We then return to MacIntyre’s framework to provide evidence from the same set of studies that AI developers do understand themselves as being responsible for more than just the role, yet they lack a robust community to whom they can submit their choices for ethical scrutiny, and their work environments are often non-conducive to their moral actualization. We conclude with specific recommendations for bringing developers’ moral agency more fully into the discourse about AI ethics.
Pub Date : 2025-05-20DOI: 10.1007/s00146-025-02351-5
Javier Conde, Miguel Gonzalez, Gonzalo Martínez, Fernando Moral, Elena Merino-Gomez, Pedro Reviriego
The rapid adoption of generative artificial intelligence (AI) is accelerating content creation and modification. For example, variations of a given piece of content, be it text or images, can be created almost instantly and at low cost. This will soon lead to the majority of text and images being created directly by AI models or by humans assisted by AI. This poses new risks: for example, AI-generated content may be used to train newer AI models and degrade their performance, or information may be lost through the transformations made by AI when the same content is processed over and over again by AI tools. An example of AI image modification is inpainting, in which an AI model completes missing fragments of an image. The incorporation of inpainting tools into photo editing programs promotes their adoption and encourages their recursive use to modify images. Inpainting can be applied recursively: starting from an image, removing some parts, applying inpainting to reconstruct the image, revising the result, then starting the inpainting process again on the reconstructed image, and so on. This paper presents an empirical evaluation of recursive inpainting using one of the most widely used image models, Stable Diffusion. The inpainting process is applied by randomly selecting a fragment of the image, reconstructing it, selecting another fragment, and repeating the process for a predefined number of iterations. The images used in the experiments are taken from a publicly available art dataset and correspond to different styles and historical periods; photographs are also evaluated as a reference. The modified images are compared with the originals using both quantitative metrics and qualitative analysis. The results show that recursive inpainting in some cases modifies the image so that it still resembles the original, while in others it leads to image degeneration, ending with a meaningless image. The outcome of the recursive inpainting process depends on several factors, such as the type of image, the size of the inpainting masks, and the number of iterations. The results of our evaluation illustrate how information can be lost through successive AI transformations. The evaluation of additional models, images, and inpainting sequences is needed to confirm whether this observation is generally applicable or occurs only in some models and settings.
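The iterative procedure the abstract describes (mask a random fragment, reconstruct it, repeat, then measure drift from the original) can be sketched as follows. This is a minimal illustration, not the authors' code: the Stable Diffusion inpainting model is replaced by a mean-color stub so the loop is self-contained, and mean squared error stands in for the paper's unnamed quantitative metrics.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_mask(height, width, frac=0.2):
    """Select a random rectangular fragment covering roughly frac of each axis."""
    mh, mw = int(height * frac), int(width * frac)
    top = rng.integers(0, height - mh + 1)
    left = rng.integers(0, width - mw + 1)
    mask = np.zeros((height, width), dtype=bool)
    mask[top:top + mh, left:left + mw] = True
    return mask

def inpaint_stub(image, mask):
    """Placeholder for a real inpainting model (e.g. Stable Diffusion):
    fills the masked fragment with the image's mean color."""
    out = image.copy()
    out[mask] = image.reshape(-1, image.shape[-1]).mean(axis=0)
    return out

def recursive_inpaint(image, iterations=10, frac=0.2):
    """Apply inpainting recursively: mask a random fragment, reconstruct it,
    and repeat the process on the reconstructed image."""
    current = image.astype(np.float64)
    for _ in range(iterations):
        mask = random_mask(*current.shape[:2], frac=frac)
        current = inpaint_stub(current, mask)
    return current

def mse(a, b):
    """Quantitative distance between original and modified images."""
    return float(np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2))

# Toy "image": random noise standing in for an artwork or photograph.
original = rng.integers(0, 256, size=(64, 64, 3)).astype(np.float64)
degraded = recursive_inpaint(original, iterations=20)
print(f"MSE after 20 iterations: {mse(original, degraded):.1f}")
```

With a real inpainting model in place of the stub, the same loop reproduces the experiment's structure: the MSE (or any chosen metric) tracked per iteration shows whether the image drifts gradually or degenerates.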
{"title":"Recursive InPainting (RIP): how much information is lost under recursive inferences?","authors":"Javier Conde, Miguel Gonzalez, Gonzalo Martínez, Fernando Moral, Elena Merino-Gomez, Pedro Reviriego","doi":"10.1007/s00146-025-02351-5","DOIUrl":"10.1007/s00146-025-02351-5","url":null,"abstract":"<div><p>The rapid adoption of generative artificial intelligence (AI) is accelerating content creation and modification. For example, variations of a given piece of content, be it text or images, can be created almost instantly and at low cost. This will soon lead to the majority of text and images being created directly by AI models or by humans assisted by AI. This poses new risks: for example, AI-generated content may be used to train newer AI models and degrade their performance, or information may be lost through the transformations made by AI when the same content is processed over and over again by AI tools. An example of AI image modification is inpainting, in which an AI model completes missing fragments of an image. The incorporation of inpainting tools into photo editing programs promotes their adoption and encourages their recursive use to modify images. Inpainting can be applied recursively: starting from an image, removing some parts, applying inpainting to reconstruct the image, revising the result, then starting the inpainting process again on the reconstructed image, and so on. This paper presents an empirical evaluation of recursive inpainting using one of the most widely used image models, Stable Diffusion. The inpainting process is applied by randomly selecting a fragment of the image, reconstructing it, selecting another fragment, and repeating the process for a predefined number of iterations. The images used in the experiments are taken from a publicly available art dataset and correspond to different styles and historical periods; photographs are also evaluated as a reference. The modified images are compared with the originals using both quantitative metrics and qualitative analysis. The results show that recursive inpainting in some cases modifies the image so that it still resembles the original, while in others it leads to image degeneration, ending with a meaningless image. The outcome of the recursive inpainting process depends on several factors, such as the type of image, the size of the inpainting masks, and the number of iterations. The results of our evaluation illustrate how information can be lost through successive AI transformations. The evaluation of additional models, images, and inpainting sequences is needed to confirm whether this observation is generally applicable or occurs only in some models and settings.</p></div>","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"40 8","pages":"6309 - 6325"},"PeriodicalIF":4.7,"publicationDate":"2025-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00146-025-02351-5.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145529691","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-05-17DOI: 10.1007/s00146-025-02376-w
Melanie Wilmink
This text interrogates the theory and practice of AI as an artistic material through close analysis of an exhibition themed around human health and disability. Highlighting the project Art Beyond Humanity: AI x Human Collaborations (2023), this analysis explores three key questions regarding (1) the social and ethical impact of GenAI tools on artistic labor, (2) the relational process of co-creation between humans and algorithms, and (3) the aesthetic potential of AI as a creative medium. Led by curator Melanie Wilmink, mathematician Eric Dolores Cuenca, and artist/health advocate Justus Harris, alongside students from Yonsei and Woosong Universities in South Korea, the project used research-creation methodologies to provoke ethical questions about copyright, access to corporately controlled systems, and artistic tactics for production. In this paper, concerns about GenAI's impacts on artistic labor are contextualized by art historical precedent and a discussion of the limitations of algorithmic creativity. Subsequent sections articulate GenAI images as an assemblage of human and machine perception that embeds bias but also holds the potential to draw attention to social issues, while outlining the techniques that artists can use to manipulate GenAI outputs. This can occur through code and database training, but also through the shaping of text prompts, since GenAI images are imbricated with language. By exploring how GenAI produces images, as both material and concept, artists have the power to generate critical discourse about the social and ethical impacts of these new technologies in the world.
{"title":"Art Beyond Humanity: exploring the human through machine creation","authors":"Melanie Wilmink","doi":"10.1007/s00146-025-02376-w","DOIUrl":"10.1007/s00146-025-02376-w","url":null,"abstract":"<div><p>This text interrogates the theory and practice of AI as an artistic material through close analysis of an exhibition themed around human health and disability. Highlighting the project <i>Art Beyond Humanity: AI x Human Collaborations</i> (2023), this analysis explores three key questions regarding (1) the social and ethical impact of GenAI tools on artistic labor, (2) the relational process of co-creation between humans and algorithms, and (3) the aesthetic potential of AI as a creative medium. Led by curator Melanie Wilmink, mathematician Eric Dolores Cuenca, and artist/health advocate Justus Harris, alongside students from Yonsei and Woosong Universities in South Korea, the project used research-creation methodologies to provoke ethical questions about copyright, access to corporately controlled systems, and artistic tactics for production. In this paper, concerns about GenAI's impacts on artistic labor are contextualized by art historical precedent and a discussion of the limitations of algorithmic creativity. Subsequent sections articulate GenAI images as an assemblage of human and machine perception that embeds bias but also holds the potential to draw attention to social issues, while outlining the techniques that artists can use to manipulate GenAI outputs. This can occur through code and database training, but also through the shaping of text prompts, since GenAI images are imbricated with language. By exploring how GenAI produces images, as both material and concept, artists have the power to generate critical discourse about the social and ethical impacts of these new technologies in the world.</p></div>","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"40 8","pages":"5919 - 5934"},"PeriodicalIF":4.7,"publicationDate":"2025-05-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145529642","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}