Pub Date: 2025-06-27 | DOI: 10.1007/s00146-025-02423-6
Yasser Pouresmaeil, Saleh Afroogh, Junfeng Jiao
This study maps the functions of artificial intelligence in disaster (mis)management. It begins with a classification of disasters in terms of their causal parameters, introducing hypothetical cases of independent or hybrid AI-caused disasters. We then overview the role of AI in disaster management and mismanagement, where the latter includes possible ethical repercussions of the use of AI in intelligent disaster management (IDM), as well as ways to prevent or mitigate these issues, which include pre-design a priori, in-design, and post-design methods as well as regulations. We then discuss the government’s role in preventing the ethical repercussions of AI use in IDM and identify and assess its deficits and challenges. This discussion is followed by an account of the advantages and disadvantages of pre-design or embedded ethics. Finally, we briefly consider the question of accountability and liability in AI-caused disasters.
Title: Mapping out AI functions in intelligent disaster (mis)management and AI-caused disasters. AI & Society, vol. 41, no. 1, pp. 505–526.
Pub Date: 2025-06-26 | DOI: 10.1007/s00146-025-02434-3
Sungdoo Kim
AI technologies are revolutionizing hiring—traditionally a lengthy and painstaking process—by automating and streamlining recruitment workflows. Organizations are increasingly adopting AI solutions for their potential to enhance efficiency, objectivity, and accuracy in candidate selection. While existing research has largely centered on concerns about transparency and ethics, less attention has been paid to a more fundamental question: do algorithms help companies identify candidates who truly align with the unique work environment? Adopting a person–environment fit perspective, this article highlights two key barriers that hinder effective talent matching in AI-driven hiring: (1) an overemphasis on job-specific qualifications at the expense of cultural alignment, and (2) the marginalization of candidates through impersonal, automated processes. If left unaddressed, these issues can contribute to higher turnover, weakened organizational culture, and diminished employer branding. To mitigate these risks, the paper outlines three strategic-level recommendations: developing customized AI models that reflect organizational culture, training general AI models with large-scale organizational data, and enhancing the candidate experience through technology and human empathy.
Title: AI-driven hiring: a boon or a barrier to finding the right talent? AI & Society, vol. 41, no. 1, pp. 557–564.
Pub Date: 2025-06-26 | DOI: 10.1007/s00146-025-02421-8
Ana Tomičić
This article explores the ethical and cultural complexities of AI chatbot development through an autoethnographic lens grounded in Actor–Network Theory (ANT) and Practice Theory. By benchmarking my experiences as a chatbot trainer against the Fairwork principles, a set of guidelines developed to ensure fair working conditions, I uncover the intricate interplay between freelance trainers, algorithms, and the broader AI industry. The study addresses two primary research questions: How do the lived experiences of chatbot trainers align with the Fairwork principles? What systemic challenges and ethical dilemmas arise in the context of AI training work? Key findings highlight significant challenges, such as inconsistent pay, overwork, and biased management practices, all exacerbated by systemic pressures prioritizing rapid development and profit over ethical considerations. ANT is utilized to analyze network dynamics among trainers, platforms, and employers, revealing how these interactions lead to ethical drift and the normalization of unfair labor practices. Practice Theory provides insights into the daily practices and pressures shaping the trainers’ work environment, contributing to stress and burnout. In addition, I apply the concept of “enshittification” to describe how profit-driven motives lead to the deterioration of working conditions and the quality of chatbot training, reflecting broader trends in digital labor platforms. Furthermore, I propose concrete recommendations for refining the Fairwork principles to better address the unique vulnerabilities faced by AI trainers.
Title: Benchmarking digital labor against Fairwork principles: an (auto)ethnography of chatbot training. AI & Society, vol. 40, no. 8, pp. 6149–6163.
Pub Date: 2025-06-26 | DOI: 10.1007/s00146-025-02404-9
Sergio Torres-Martínez
This paper explores the convergence of Agentive Cognitive Construction Grammar (AgCCxG) and neuro-symbolic AI (NSAI) for modeling human cognition and language processing. AgCCxG conceptualizes language as an embodied, predictive, and semiotic system operating through a Markov Blanket, structuring cognition via differentiation, optimization, and predictive control. NSAI integrates neural networks’ pattern recognition with symbolic AI’s reasoning capabilities, mirroring dual-system models of human cognition. I argue that AgCCxG provides a neurobiologically plausible foundation for enhancing NSAI’s predictive modeling, enabling AI to progress from statistical correlation toward meaning-driven computation. By incorporating semiotic agency, embodied inference, and context-aware reasoning, this integration advances explainable AI, scientific discovery, and personalized education. The synergy addresses critical challenges including the hallucination problem, with symbolic reasoning serving as a corrective mechanism for neural outputs. The future of artificial intelligence requires principled integration of predictive processing, semiotic agency, and embodied cognition—principles that have shaped human language and thought for millennia. This represents a significant step toward bridging human and machine intelligence in more theoretically sound and ethically responsible ways.
Title: Bridging embodied cognition and AI: Agentive Cognitive Construction Grammar as a backing theory for neuro-symbolic AI. AI & Society, vol. 40, no. 8, pp. 6455–6476.
Pub Date: 2025-06-26 | DOI: 10.1007/s00146-025-02437-0
Bei Zhang
This study examines how emotional expressions shape public discourse on AI ethics within China’s algorithmically curated and policy-regulated digital environment. Analyzing 20,037 Weibo posts from August 2023 to January 2025, we compare institutional and ordinary users’ engagement during key regulatory shifts and the launch of the domestic model DeepSeek-R1. Using LDA topic modeling and sentiment analysis, we find that institutional actors promote compliance-focused narratives, while ordinary users mobilize eight distinct emotions—particularly satire, outrage, and anxiety—identified through an extended NRC lexicon, to contest issues such as misinformation, plagiarism, and creative authorship. Notably, posts invoking existential or justice-oriented concerns receive up to 7.3 times more engagement than policy-aligned content, with Gold V accounts averaging 117.95 likes per post compared to 16.09 for Blue V accounts. The viral #AICheating campaign, which garnered 2.3 billion views, illustrates how affective discourse can pressure policy adjustments, contributing to national academic integrity reforms in 2024. We propose the concept of affective counterpublics under constraint—a theoretical framework that extends Fraser’s notion of subaltern counterpublics by integrating algorithmic resistance—to explain how emotionally driven grassroots mobilization operates within authoritarian platform governance. This reframes emotional expression as a form of embodied public reasoning capable of recalibrating policy attention under algorithmic suppression.
Title: Affective counterpublics under constraint: emotion, platform governance, and AI ethics discourse on Chinese social media. AI & Society, vol. 41, no. 1, pp. 595–609.
Pub Date: 2025-06-25 | DOI: 10.1007/s00146-025-02430-7
Francis Lee
This article develops an analytical and methodological field guide for studying the mundane practices that constitute machine learning systems. Drawing on science and technology studies (STS), I move beyond the opacity/transparency dichotomy that has dominated critical algorithm studies to examine how machine learning is assembled through everyday work. Rather than treating algorithms as black boxes or magical entities, I focus on four empirical moments of translation—feature extraction, vectorization, clustering, and data drift—where technical work becomes political choice. By ethnographically attending to practitioners' tinkering, negotiations, and valuation practices in these moments, we can trace how classification systems are constructed and stabilized. This approach allows us to ask: How are particular features of the world selected as relevant for prediction? Through what practices are people and phenomena translated into mathematical vector spaces? How are temporal assumptions encoded in data? By studying these mundane processes of construction, we can understand how machine learning systems enact particular ways of seeing, classifying, and predicting the world. This field guide thus contributes methodological tools for analyzing how the politics of machine learning is assembled in practice, opening analytical space for critical engagement beyond calls for transparency or fairness.
Title: The practices and politics of machine learning: a field guide for analyzing artificial intelligence. AI & Society, vol. 40, no. 8, pp. 6135–6148. Open access PDF: https://link.springer.com/content/pdf/10.1007/s00146-025-02430-7.pdf
Pub Date: 2025-06-25 | DOI: 10.1007/s00146-025-02403-w
Marit MacArthur
This article offers broadly useful guidance for society’s adaptation to the omnipresence of generative AI, with implications for every profession and academic discipline that involves writing or coding (recognized by some as a form of writing). Offering an interdisciplinary perspective grounded in the digital humanities, software development and writing across the curriculum, and building on performance historian Christopher Grobe’s research on the role of arts and humanities expertise in AI development, I offer redefinitions of training data and prompt engineering. These essential yet misleading terms obscure the critical roles that humanities-based expertise has played in the development of GPTs and must play in guiding society’s adaptation to generative AI. I also briefly review scholarship on what constitutes “writing” and what it means to teach writing. Next, I reflect on long-term trends, in professional software development, of code sharing and reliance on automation, and the likely impact of imposing similar practices in professional writing. After identifying the fundamental problem of rhetorical debt and outlining its consequences, I further motivate my argument, in relation to the new economic value of expert writing. This new economic value necessitates a revaluation of the humanities—not only by computer science, the tech industry, and schools and universities, but by humanists themselves.
Title: Large language models and the problem of rhetorical debt. AI & Society, vol. 40, no. 8, pp. 6425–6438. Open access PDF: https://link.springer.com/content/pdf/10.1007/s00146-025-02403-w.pdf
Pub Date: 2025-06-25 | DOI: 10.1007/s00146-025-02414-7
Ciano Aydin, Luca Possati
This paper explores how data mining can redefine and reshape human identity, transforming the self into a fluid construct continuously shaped through ongoing profiling. To understand this transformation, we draw on Lacan’s theory of subjectivity and his conception of desire as the engine of subjectivity. Rejecting essentialist notions of the self, Lacan argues that identity is formed within—and through—social, cultural, and, as we emphasize here, technological contexts. We examine how data mining affects processes of self-formation in this technological era, using Lacanian theory as a framework to analyze its impact. We argue that data mining does not simply replicate traditional symbolic processes; rather, it introduces a different dynamic that can disrupt established modes of symbolic identification rooted in social norms, laws, and customs. This disruption may result in forms of de-identification but also opens the possibility for new types of self-identification. We propose that this transformation has a double effect: it both dissolves elements of the traditional Symbolic order and simultaneously gives rise to a new Symbolic—one that aims to define and regulate emerging identities. We believe that this tension presents both a challenge and an opportunity for contemporary processes of self-formation.
Title: Less and more than data: a Lacanian inquiry into self-formation in the age of data mining. AI & Society, vol. 40, no. 8, pp. 6123–6134. Open access PDF: https://link.springer.com/content/pdf/10.1007/s00146-025-02414-7.pdf
Pub Date: 2025-06-25 | DOI: 10.1007/s00146-025-02432-5
Oshri Bar-Gil
This article examines the transformative impact of AI-based art generators by extending Walter Benjamin’s arguments on mechanical reproduction to the digital age. While Benjamin examined how mechanical reproduction affected works created with clear human intentionality, AI-generated art introduces a fundamentally different dynamic through ‘distributed agency’ across human prompters, algorithmic interpretation mechanisms, and collective training datasets.
Through an analysis of four key examples that illustrate different aspects of AI’s influence on artistic practice—generative AI art platforms, the Portrait of Edmond de Belamy, Refik Anadol’s Archive Dreaming, and The 2023 Sony World Photography Awards controversy—the study advances four interconnected arguments: first, that generative AI reconfigures creative agency beyond traditional human-centered models; second, that AI establishes new dialogic relationships between creators, artworks, and audiences; third, that algorithmic generation differs fundamentally from mechanical reproduction by creating novel interpretative expressions rather than duplicating existing works; and fourth, that AI transforms the societal dimensions of artistic production through a dialectical relationship between democratization and proletarianization.
By critically extending Benjamin’s framework to address contemporary technological conditions, this study provides theoretical foundations for understanding art in an age of algorithmic creation. The findings reveal how AI both fulfills and challenges Benjamin’s predictions about technological art reproduction while creating new epistemic and sociotechnical configurations that require reconceptualizing traditional notions of artistic authenticity, creative agency, and cultural preservation in an era of increasing algorithmic mediation.
Title: The transformation of artistic creation: from Benjamin’s reproduction to AI generation. AI & Society, vol. 40, no. 8, pp. 6439–6453. Open access PDF: https://link.springer.com/content/pdf/10.1007/s00146-025-02432-5.pdf
Pub Date : 2025-06-24DOI: 10.1007/s00146-025-02428-1
Tomás Rosa, Leandro Pereira, José Crespo de Carvalho, Rui Vinhas da Silva, Ana Simões
This study investigates the impact of Artificial Intelligence (AI) on society, business, and management, using a qualitative approach centered on interview analysis and a review of the literature. Text mining techniques were applied through the KH Coder tool, allowing for a detailed exploration of how AI is transforming these three dimensions. The results reveal significant changes in management practices, deep economic impacts, and notable social shifts brought about by the rapid adoption of AI. The originality of this study lies in the combination of qualitative analysis with the exploration of textual data, providing a comprehensive view of the ethical and practical implications of AI. It also acknowledges limitations, such as the rapid pace of technological development and potential bias in the perceptions collected. This work contributes to a better understanding of the challenges and opportunities presented by AI and suggests pathways for ethical and effective integration.
{"title":"From risk to reward: AI’s role in shaping tomorrow’s economy and society","authors":"Tomás Rosa, Leandro Pereira, José Crespo de Carvalho, Rui Vinhas da Silva, Ana Simões","doi":"10.1007/s00146-025-02428-1","DOIUrl":"10.1007/s00146-025-02428-1","url":null,"abstract":"<div><p>This study investigates the impact of Artificial Intelligence (AI) on society, business, and management, using a qualitative approach centered on interview analysis and a review of the literature. Text mining techniques were applied through the KH Coder tool, allowing for a detailed exploration of how AI is transforming these three dimensions. The results reveal significant changes in management practices, deep economic impacts, and notable social shifts brought about by the rapid adoption of AI. The originality of this study lies in the combination of qualitative analysis with the exploration of textual data, providing a comprehensive view of the ethical and practical implications of AI. It also acknowledges limitations, such as the rapid pace of technological development and potential bias in the perceptions collected. This work contributes to a better understanding of the challenges and opportunities presented by AI and suggests pathways for ethical and effective integration.</p></div>","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"40 8","pages":"6097 - 6121"},"PeriodicalIF":4.7,"publicationDate":"2025-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00146-025-02428-1.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145529635","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}