Generative AI and the avant-garde: bridging historical innovation with contemporary art
Pub Date: 2025-06-24 | DOI: 10.1007/s00146-025-02410-x | AI & Society 40(8): 6407-6424
Jurgis Peters
The adoption of generative AI technology in visual arts echoes the transformational process initiated by early 20th-century avant-garde movements such as Constructivism and Dadaism. By utilising the technological advances of their time, avant-garde artists redefined the role of the artist and what could be considered artwork. Written from the perspective of an art practitioner and researcher, this paper explores how contemporary artists working with AI continue the radical and experimental spirit that characterised the early avant-garde. The re-evaluation of artists' roles, from sole creators to engineer-collaborators and curators in an AI-mediated creative process, underscores a shift in artistic practice. Through detailed case studies of three contemporary artists, the paper illustrates how generative AI is used not only to create artwork but also to critique technological, cultural, and societal systems. Additionally, it addresses ethical concerns such as AI bias, data commodification, and the environmental impact of AI technologies, situating contemporary generative AI practices within the broader context of art's evolving societal role. Ultimately, the paper underscores the transformation of artistic practice in the digital age, where AI becomes both a creative tool and a subject of critical reflection.
{"title":"Generative AI and the avant-garde: bridging historical innovation with contemporary art","authors":"Jurgis Peters","doi":"10.1007/s00146-025-02410-x","DOIUrl":"10.1007/s00146-025-02410-x","url":null,"abstract":"<div><p>The adoption of generative AI technology in visual arts echoes the transformational process initiated by early 20th-century avant-garde movements such as Constructivism and Dadaism. By utilising technological advances of their time avant-garde artists redefine the role of an artist and what could be considered as artwork. Written from the perspective of an art practitioner and researcher, this paper explores how contemporary artists working with AI continue the radical and experimental spirit that characterised early avant-garde. The re-evaluation of artist roles from sole creators to engineers-collaborators and curators in an AI-mediated creative process underscores a shift in the artistic practice. Through detailed case studies of three contemporary artists, the paper illustrates how generative AI is not only used to create artwork but also to critique technological, cultural, and societal systems. Additionally, it addresses ethical concerns such as AI bias, data commodification, and the environmental impact of AI technologies, situating contemporary generative AI practices within the broader context of art's evolving societal role. Ultimately, the paper underscores the transformation of artistic practice in the digital age, where AI becomes both a creative tool and a subject of critical reflection.</p></div>","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"40 8","pages":"6407 - 6424"},"PeriodicalIF":4.7,"publicationDate":"2025-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00146-025-02410-x.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145529637","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Moral disagreement and the limits of AI value alignment: a dual challenge of epistemic justification and political legitimacy
Pub Date: 2025-06-21 | DOI: 10.1007/s00146-025-02427-2 | AI & Society 40(8): 6073-6087
Nick Schuster, Daniel Kilov
AI systems are increasingly in a position to have deep and systemic impacts on human wellbeing. Projects in value alignment, a critical area of AI safety research, must ultimately aim to ensure that all those who stand to be affected by such systems have good reason to accept their outputs. This is especially challenging where AI systems are involved in making morally controversial decisions. In this paper, we consider three current approaches to value alignment: crowdsourcing, reinforcement learning from human feedback, and constitutional AI. We argue that all three fail to accommodate reasonable moral disagreement, since they provide neither good epistemic reasons nor good political reasons for accepting AI systems’ morally controversial outputs. Since these appear to be the most promising approaches to value alignment currently on offer, we conclude that accommodating reasonable moral disagreement remains an open problem for AI safety, and we offer guidance for future research.
{"title":"Moral disagreement and the limits of AI value alignment: a dual challenge of epistemic justification and political legitimacy","authors":"Nick Schuster, Daniel Kilov","doi":"10.1007/s00146-025-02427-2","DOIUrl":"10.1007/s00146-025-02427-2","url":null,"abstract":"<div><p>AI systems are increasingly in a position to have deep and systemic impacts on human wellbeing. Projects in value alignment, a critical area of AI safety research, must ultimately aim to ensure that all those who stand to be affected by such systems have good reason to accept their outputs. This is especially challenging where AI systems are involved in making morally controversial decisions. In this paper, we consider three current approaches to value alignment: crowdsourcing, reinforcement learning from human feedback, and constitutional AI. We argue that all three fail to accommodate reasonable moral disagreement, since they provide neither good epistemic reasons nor good political reasons for accepting AI systems’ morally controversial outputs. Since these appear to be the most promising approaches to value alignment currently on offer, we conclude that accommodating reasonable moral disagreement remains an open problem for AI safety, and we offer guidance for future research.</p></div>","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"40 8","pages":"6073 - 6087"},"PeriodicalIF":4.7,"publicationDate":"2025-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00146-025-02427-2.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145529625","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Democratic legitimacy of AI in judicial decision-making
Pub Date: 2025-06-19 | DOI: 10.1007/s00146-025-02411-w | AI & Society 40(8): 6025-6035
Anastasia Nefeli Vidaki, Vagelis Papakonstantinou
Concerns have been expressed regarding the impact of automation and the penetration of new technologies into the judicial field on fundamental rights, democratic values and the notion of legitimacy in general. Particular risks are posed to legitimate judicial decision-making and to the rights of the parties to court proceedings. This paper examines the complex relationship between artificial intelligence (AI) and the democratic legitimacy of judicial decision-making. While AI systems have been introduced in various areas of public administration to support law application and public policy, their role in the judiciary raises distinct questions about the legitimacy of algorithmic influence on adjudication. Traditional judicial legitimacy is grounded in principles of impartiality, transparency and reasoned justification, core democratic tenets that AI systems threaten to disrupt. There is a possibility that biased algorithms will be deployed in the justice system, that judges' impartial and independent thinking and reasoning will be crowded out, and that the judiciary will gradually be replaced by machines reaching decisions based on statistics rather than individualized assessment. This scenario, which is not far-fetched, menaces the whole democratic structure and idea. This paper reviews theoretical perspectives on democratic legitimacy, focusing on the contrasting views of judicial authority as either an undemocratic imposition on political rights or a consensual safeguard for fundamental rights within a democratic context. Unlike previous studies that examine these topics in isolation, this paper provides a comprehensive framework that evaluates the diverse degrees of AI automation and how they affect impartiality, publicity and reasoning. It goes further by exploring the possible threats AI poses to these aspects of democratic legitimacy and by suggesting solutions to counterbalance them. Despite doubts over the compatibility between AI and democratic ideals, this paper contributes an innovative hybrid model for judicial decision-making that integrates human oversight with AI assistance, seeking to reconcile the benefits of AI with the need to uphold democratic principles within the judicial review process. This approach aims to fill a critical gap in the current literature by directly confronting the challenges and opportunities presented by AI in judicial contexts, with a view to sustaining democratic values in a future where the role of AI in the judiciary is likely to expand.
{"title":"Democratic legitimacy of AI in judicial decision-making","authors":"Anastasia Nefeli Vidaki, Vagelis Papakonstantinou","doi":"10.1007/s00146-025-02411-w","DOIUrl":"10.1007/s00146-025-02411-w","url":null,"abstract":"<div><p>Concerns have been expressed regarding the impact of automation procedures and penetration of new technologies into the judicial field on fundamental rights, democratic values and the notion of legitimacy in general. There are particular risks posed to the legitimate judicial decision-making and the rights of the parties of court proceedings. This paper examines the complex relationship between the artificial intelligence (AI) and the democratic legitimacy of judicial decision-making. While AI systems have been introduced in various areas of public administration to support law application and public policy, their role in the judiciary raises distinct questions about the legitimacy of algorithmic influence on adjudication. Normally, traditional judicial legitimacy is grounded in principles of impartiality, transparency and reasoned justification, which AI systems challenge by potentially disrupting these core democratic tenets. There lies a possibility that biased algorithms will be deployed in justice. The judges and their impartial and independent thinking and reasoning will be crowded out and the judiciary will be gradually replaced by machines reaching a decision based on statistics rather than an individualized assessment. This, not that far-fetched scenario, seems menacing for the whole democratic structure and idea. This paper reviews theoretical perspectives on democratic legitimacy, focusing on the contrasting views of judicial authority as either an undemocratic imposition on political rights or as a consensual safeguard for fundamental rights within a democratic context. Unlike previous studies that examine the raised topics in isolation, this paper provides a comprehensive framework that evaluates the diverse degrees of AI automation and how they affect impartiality, publicity and reasoning. It goes further by exploring its possible threats to those aspects of democratic legitimacy and suggesting some possible solutions to counterbalance them. Despite the doubts over the compatibility between AI and democratic ideals, this paper contributes an innovative hybrid model for judicial decision-making that integrates human oversight with AI assistance, seeking to reconcile the benefits of AI with the need to uphold democratic principles within the judicial review process. This approach aims to fill a critical gap in the current literature by directly confronting challenges and opportunities presented by AI in judicial contexts, with a view to sustaining democratic values in a future where the role of AI in the judiciary is likely to expand.</p></div>","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"40 8","pages":"6025 - 6035"},"PeriodicalIF":4.7,"publicationDate":"2025-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145529624","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Joint journeys: the linguistic domestication of smart speakers and their users in interaction
Pub Date: 2025-06-14 | DOI: 10.1007/s00146-025-02384-w | AI & Society 40(8): 6037-6057
Tim Hector
This article develops the concept of joint journeys as a metaphor to analyze how smart speakers become embedded in everyday domestic life and to trace the reciprocal, linguistically mediated processes of domestication. While the domestication framework is well established in media studies, AI-based, networked technologies like smart speakers challenge its underlying assumptions by connecting private households to global infrastructures, thereby blurring boundaries between the public and the private. Drawing on video and audio recordings from German households, the article explores how conversational linguistic practices contribute to the domestication of smart speakers. Using methods from ethnomethodological conversation analysis and interactional linguistics, the study traces how smart speakers become integrated into everyday life, not just materially and functionally but also discursively, through practices relating to placement decisions, adaptation to sequential structures, personalization features, and reactions to malfunction. The article shows that mutual accommodation takes place: while users adapt their language to interface constraints, devices are also 'personalized' towards their users. The metaphor of joint journeys emphasizes that the co-evolution of users and devices is an ongoing, non-linear expedition shaped by language, socio-material environments, and infrastructural logics. These observations make it clear that it is through practices and language that AI technologies become integrated into everyday culture, which also raises questions about the broader datafied ecosystems to which interactions with them contribute.
{"title":"Joint journeys: the linguistic domestication of smart speakers and their users in interaction","authors":"Tim Hector","doi":"10.1007/s00146-025-02384-w","DOIUrl":"10.1007/s00146-025-02384-w","url":null,"abstract":"<div><p>This article develops the concept of <i>joint journeys</i> as a metaphor to analyze how smart speakers become embedded in everyday domestic life and to trace the reciprocal, linguistically-mediated processes of domestication. While the domestication framework is well established in media studies, AI-based, networked technologies like smart speakers challenge its underlying assumptions by connecting private households to global infrastructures, thereby blurring boundaries between the public and the private. Drawing on video and audio recordings from German households, the article explores how conversational linguistic practices contribute to the domestication of smart speakers. Using methods from ethnomethodological conversation analysis and interactional linguistics, the study traces how smart speakers become integrated into everyday life, not just materially and functionally but also discursively, through practices relating to placement decisions, adaptation to sequential structures, personalization features, and reactions to malfunction. The article shows that mutual accommodation takes place: while users adapt their language to interface constraints, devices also get ‘personalized’ towards their users. The metaphor of joint journeys emphasizes that the co-evolution of users and devices is an ongoing, non-linear expedition shaped by language, socio-material environments, and infrastructural logics. These observations make it clear that it is through practices and language that AI technologies become integrated into everyday culture, which also raises questions about the broader datafied ecosystems to which interactions with them contribute.</p></div>","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"40 8","pages":"6037 - 6057"},"PeriodicalIF":4.7,"publicationDate":"2025-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00146-025-02384-w.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145529693","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The material making of language as practice of global domination and control: continuations from European colonialism to AI
Pub Date: 2025-06-14 | DOI: 10.1007/s00146-025-02389-5 | AI & Society 40(8): 6059-6071
Bettina Migge, Britta Schneider
Although AI language technologies are typically presented as future-oriented technological innovation, no element of machine learning technology is unaffected by the cultural and historical contexts of its emergence. This is particularly true of language constructions and the materialization of language in AI. Examination of computational language culture reveals striking continuities with concepts of language and their materialization in technology settings throughout the history of European colonialism. Based on an in-depth analysis of how languages were materially produced under colonialism and are treated in AI technologies, we show the strong colonial continuities in language materialization processes to this day. This also indicates the crucial role that language materializations play in the construction and maintenance of power and social order in a global realm.
{"title":"The material making of language as practice of global domination and control: continuations from European colonialism to AI","authors":"Bettina Migge, Britta Schneider","doi":"10.1007/s00146-025-02389-5","DOIUrl":"10.1007/s00146-025-02389-5","url":null,"abstract":"<div><p>Although AI language technologies are typically presented as future-oriented technological innovation, none of the elements of machine learning technologies are unaffected by the cultural and historical contexts of their emergence. This is particularly true in the case of language constructions and the materialization of language in AI. Examination of computational language culture reveals striking continuities to concepts of language and their materialization in technology settings in the history of European colonialism. Based on an in-depth analysis of how languages were materially produced in colonialism and are treated in AI technologies, we show the strong colonial continuities in language materialization processes to this day. This also indicates the crucial role that language materializations play in the construction and maintenance of power and social order in a global realm.</p></div>","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"40 8","pages":"6059 - 6071"},"PeriodicalIF":4.7,"publicationDate":"2025-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00146-025-02389-5.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145529694","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The prediction of non-ergodic humanity by artificial intelligence
Pub Date: 2025-06-13 | DOI: 10.1007/s00146-025-02393-9 | AI & Society 40(8): 5999-6010
Peter Stewart
This article aims to affirm and instantiate the main accounts showing intrinsic limitations of artificial intelligence computing in a real world of organisms, people and speech. It is argued that these limits mainly concern non-ergodic (or non-repeating) phenomena. The paper aims to extend the debate on the limits of AI through a preliminary examination of the dispersion of both regularities and non-ergodic phenomena and processes in society and in human persons. It is argued that regularities and non-ergodic processes are deeply intertwined. Social regularity, for example from the built environment and conformity, is discussed. In society, non-ergodicity is especially found in the lifeworld of speech and intersubjectivity. The human person creates non-ergodicity through numerous routes, while individual regularities are seen in things such as habit and routine. This study asserts that human intersubjective life in the often non-ergodic lifeworld, and the inbuilt non-repeating dimensions of an individual's living out of the world, should be recognized as extensive areas where AI prediction will be weak. It is hypothesized that the intensity of non-ergodicity in phenomena is a firm indicator of weak AI prediction, and that most successful AI prediction of social phenomena predominantly reflects the sort of social regularities discussed in this article.
{"title":"The prediction of non-ergodic humanity by artificial intelligence","authors":"Peter Stewart","doi":"10.1007/s00146-025-02393-9","DOIUrl":"10.1007/s00146-025-02393-9","url":null,"abstract":"<div><p>This article aims to affirm and instantiate the main accounts showing intrinsic limitations of artificial intelligence computing in a real world of organisms, people and speech. It is argued that these limits mainly concern non-ergodic (or non-repeating) phenomena. This paper aims to extend the debate on the limits of AI through a preliminary examination of the dispersion of both regularities and non-ergodic phenomena and processes in both society and human persons. It is argued that regularities and non-ergodic processes are deeply intertwined. Social regularity, for example from the built environment and conformity, is discussed. In society, non-ergodicity is especially found in the lifeworld of speech and intersubjectivity. The human person creates non-ergodicity through numerous routes. Individual regularities are seen in things such as habit and routine. This study asserts that human intersubjective life in the often nonergodic lifeworld and inbuilt non-repeating dimensions of an individual’s living out of the world, should be recognized as extensive areas where AI prediction will be weak. It is hypothesized that the intensity of non-ergodicity in phenomena is a firm indicator of weak AI prediction, and that most successful AI prediction of social phenomena predominantly reflects the sort of social regularities discussed in this article.</p></div>","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"40 8","pages":"5999 - 6010"},"PeriodicalIF":4.7,"publicationDate":"2025-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00146-025-02393-9.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145529608","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Every wave carries a sense of déjà vu: revisiting the computerization movement perspective to understand the recent push towards artificial intelligence
Pub Date: 2025-06-13 | DOI: 10.1007/s00146-025-02402-x | AI & Society 40(8): 6379-6391
Xiaoyao Han, Oskar J. Gstrein, Vasilios Andrikopoulos, Ronald Stolk
Analyzed through the lens of the "computerization movement" (CM), the development of revolutionary technologies has consistently followed a recurring trajectory in terms of origin, momentum, diffusion, and societal impact. Building on the analysis of selected historical trajectories, similar dynamics are discernible in the recent push towards the adoption of artificial intelligence (AI), enhanced by the capabilities provided by Big Data infrastructure. This paper explores Big Data and AI within the framework of CMs, analyzing their driving visions, trajectories, interconnectedness, and the societal discourses formed around their adoption. By drawing parallels with selected past CMs and situating current events within this historical context, the study provides a novel perspective that may facilitate a better understanding of the current technological landscape and aid in navigating the complex interplay between innovation, social change, and human expectations. The study shows that even if technological innovations remain central to the recent push towards AI adoption, the shared beliefs and visionary ideals underpinning adoption are equally influential. These beliefs and ideals have continually mobilized people around the relevance of AI, in the past and today, even as the supporting infrastructure, core technologies, and their relevance for society have evolved.
{"title":"Every wave carries a sense of déjà vu: revisiting the computerization movement perspective to understand the recent push towards artificial intelligence","authors":"Xiaoyao Han, Oskar J. Gstrein, Vasilios Andrikopoulos, Ronald Stolk","doi":"10.1007/s00146-025-02402-x","DOIUrl":"10.1007/s00146-025-02402-x","url":null,"abstract":"<div><p>Analyzed through the lens of the “computerization movement” (CM), the development of revolutionary technologies has consistently followed a recurring trajectory in terms of the origin, momentum, diffusion, and societal impact. Building on the analysis of selected historical trajectories, similar dynamics are discernible for the recent push towards the adoption of artificial intelligence (AI), being enhanced by capabilities provided by Big Data infrastructure. This paper explores Big Data and AI within the framework of CMs, analyzing their driving visions, trajectories, interconnectedness, and the societal discourses formed around their adoption. By drawing parallels with selected past CMs and situating current events within such historical context, this study provides a novel perspective hopefully facilitating a better understanding of the current technological landscape, and aiding in the navigation of the complex interplay between innovation, social change, and human expectations. The study shows that even if technological innovations remain central for the recent push towards AI adoption, shared beliefs and visionary ideals underpinning adoption are equally influential. These beliefs and ideals have continually mobilized people around the relevance of AI—in the past and today—even as the supporting infrastructure, core technologies, and their relevance for society have evolved. </p></div>","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"40 8","pages":"6379 - 6391"},"PeriodicalIF":4.7,"publicationDate":"2025-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00146-025-02402-x.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145529638","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Move fast and break people? Ethics, companion apps, and the case of Character.ai
Pub Date: 2025-06-10 | DOI: 10.1007/s00146-025-02408-5 | AI & Society 40(8): 6365-6377
Vian Bakir, Andrew McStay
Riffing off move fast and break things, the internal motto coined by Meta's Mark Zuckerberg, this paper examines the ethical dimensions of human relationships with AI companions, focusing on Character.ai, a platform where users interact with AI-generated 'characters' ranging from fictional figures to representations of real people. Drawing on an assessment of the platform's design, and on the first civil lawsuit brought against Character.ai in the USA in 2024 following the suicide of a teenage user, this paper identifies unresolved ethical issues in companion-based AI technologies. These include risks arising from the difficulty of separating AI-based roleplay from real life, from unconstrained AI models performing edgy characters, from reality detachment, and from confusion caused by dishonest anthropomorphism and emulated empathy. All have implications for safety measures for vulnerable users. While acknowledging the potential benefits of AI companions, the paper argues for the urgent need for ethical frameworks that balance innovation with user safety. By proposing actionable recommendations for design and governance, it aims to guide industry, policymakers, and scholars in fostering safer and more responsible AI companion platforms.
{"title":"Move fast and break people? Ethics, companion apps, and the case of Character.ai","authors":"Vian Bakir, Andrew McStay","doi":"10.1007/s00146-025-02408-5","DOIUrl":"10.1007/s00146-025-02408-5","url":null,"abstract":"<div><p>Riffing off <i>move fast and break things</i>, the internal motto coined by Meta’s Mark Zuckerberg, this paper examines the ethical dimensions of human relationships with AI companions, focusing on Character.ai—a platform where users interact with AI-generated ‘characters’ ranging from fictional figures to representations of real people. Drawing on an assessment of the platform’s design, and the first civil lawsuit brought against Character.ai in the USA in 2024 following the suicide of a teenage user, this paper identifies unresolved ethical issues in companion-based AI technologies. These include risks from difficulty in separating AI-based roleplay from real life, unconstrained AI models performing edgy characters, reality detachment, and confusion by dishonest anthropomorphism and emulated empathy. All have implications for safety measures for vulnerable users. While acknowledging the potential benefits of AI companions, this paper argues for the urgent need for ethical frameworks that balance innovation with user safety. By proposing actionable recommendations for design and governance, the paper aims to guide industry, policymakers, and scholars in fostering safer and more responsible AI companion platforms.</p></div>","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"40 8","pages":"6365 - 6377"},"PeriodicalIF":4.7,"publicationDate":"2025-06-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00146-025-02408-5.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145529688","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reformulating Digital Leninism: a response to Sebastian Heilmann's notions on digital governance in China
Pub Date: 2025-06-09 | DOI: 10.1007/s00146-025-02412-9 | AI & Society 40(8): 6357-6364
Eco Hamersma
Chinese policies on digital governance, development, and surveillance in general, and the Social Credit System in particular, have been described using the term Digital Leninism. The purpose of this paper is to explicate the nature of this term and to re-evaluate its foundational principles. Digital Leninism was coined in 2016 by Sebastian Heilmann of the Mercator Institute for China Studies to interpret Chinese digital policies within the environment of the authoritarian one-party rule of the Chinese Communist Party led by Chairman Xi Jinping, and the term has since become popular in academic discourse. The definition of Digital Leninism generally used in academic literature is one where digital technology is focused on social governance while simultaneously maintaining a strong security perspective, particularly with regard to Xi's administration utilizing cutting-edge digital technologies for algorithmic governance. The term is, therefore, seemingly used exclusively in a Chinese context. However, although alluding to Leninist thought through the inclusion of its namesake in the two-word term, Heilmann's original formulation lacks any Leninist ideological underpinning: the Leninist connection amounts to no more than combining the Chinese state's Leninist background with authoritarian practices in cyberspace. We would, therefore, argue that Heilmann's original formulation is simply another stand-in for the more broadly applicable term digital authoritarianism. Our adjustment of Heilmann's theory sets out to universalise the notion beyond its unnecessary Chinese context through the application of Lenin's ideological worldview, specifically by treating class consciousness as a fundamental pillar of digital governance within a digital Leninist system. In doing so we provide a potential insight into the internal logic of the Chinese Communist Party in its endeavours to employ advanced digital monitoring, manipulation, and control, and we use this reformulated Digital Leninism to provide a better rationale for the development of Social Credit Systems in a Chinese environment as one example policy. To be sure, this paper does not attempt an all-encompassing argument that Social Credit Systems derive from the re-evaluated notion of Digital Leninism. Instead, it aims to add depth where before there was only a superficial framework, by placing Digital Leninism within a line of policies implemented by a Leninist vanguard party to remain in control of a population that has not yet transitioned out of false consciousness, occupying a new policy space, with an orthodox theoretical underpinning, at the intersection of the real world and cyberspace, a space created through the advancement of technology.
{"title":"Reformulating Digital Leninism: a response to Sebastian Heilmann’s notions on digital governance in China","authors":"Eco Hamersma","doi":"10.1007/s00146-025-02412-9","DOIUrl":"10.1007/s00146-025-02412-9","url":null,"abstract":"<div><p>Discussion of Chinese policies on digital governance, development, and surveillance in general, as well as the Social Credit System in particular, have been described using the terminology <i>Digital Leninism</i>. The purpose of this paper is to explicate the nature of this term to re-evaluate its foundational principles. Within the original context, Digital Leninism was coined in 2016 by Sebastian Heilmann of the Mercator Institute for China Studies to interpret Chinese digital policies within the environment of the authoritarian one-party policies of the Chinese Communist Party led by Chairman Xi Jinping. Since then, the term has become popular in academic discourse. In general, the definition of Digital Leninism which generally used in academic literature is one, where digital technology is focused on social governance while simultaneously maintaining a strong security perspective. Particularly within the frame of Xi’s administration utilizing cutting-edge digital technologies for algorithmic governance. It is, therefore, seemingly used exclusively in a Chinese context. However, although alluding to Leninist thought via the inclusion of its namesake in the two-word term, Heilmann’s original formulation lacks any Leninist ideological underpinning. In short, the Leninist connection in the original formulation is as basic as the combination of the Chinese state's Leninist background with authoritarian practices in cyberspace. We would, therefore, argue that Heilmann’s original formulation is simply another stand-in for the more broadly applicable term <i>digital authoritarianism</i>. Meanwhile our adjustment of Heilmann’s theory sets to universalise the notion out of its unnecessary Chinese context through the application of Lenin’s ideological worldview, specifically by looking at class consciousness as a fundamental pillar of digital governance within a digital Leninist system. In doing so we are able to provide a potential insight into the internal logic of the Chinese Communist Party in its endeavours to employ advanced digital monitoring, manipulation, and control. Simultaneously using this reformulated Digital Leninism to provide a better rationale for the development of Social Credit Systems in a Chinese environment as one example policy. To be sure, this paper is not attempting to issue a cause-all end-all argument for the development of Social Credit Systems as deriving from the revaluated notion of Digital Leninism. Instead, this endeavour aims to add depth, where before there was only a superficial framework by placing Digital Leninism within a line of policies implemented by a Leninist vanguard party to remain in control of a population which has not yet transitioned out of false consciousness. 
Occupying a new policy space, with an orthodox theoretical underpinning, at the intersection of the real world and cyberspace, a space created through the advancement of technology.</p></div>","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"40 8","pages":"6357 - 6364"},"PeriodicalIF":4.7,"publicationDate":"2025-06-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145529603","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}