
AI & Society: Latest Articles

Generative AI and the avant-garde: bridging historical innovation with contemporary art
IF 4.7 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-06-24 DOI: 10.1007/s00146-025-02410-x
Jurgis Peters

The adoption of generative AI technology in the visual arts echoes the transformational process initiated by early 20th-century avant-garde movements such as Constructivism and Dadaism. By utilising the technological advances of their time, avant-garde artists redefined the role of the artist and what could be considered artwork. Written from the perspective of an art practitioner and researcher, this paper explores how contemporary artists working with AI continue the radical and experimental spirit that characterised the early avant-garde. The re-evaluation of artists' roles, from sole creators to engineer-collaborators and curators in an AI-mediated creative process, underscores a shift in artistic practice. Through detailed case studies of three contemporary artists, the paper illustrates how generative AI is used not only to create artwork but also to critique technological, cultural, and societal systems. Additionally, it addresses ethical concerns such as AI bias, data commodification, and the environmental impact of AI technologies, situating contemporary generative AI practices within the broader context of art's evolving societal role. Ultimately, the paper underscores the transformation of artistic practice in the digital age, where AI becomes both a creative tool and a subject of critical reflection.

Citations: 0
Trust in AI
IF 4.7 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-06-22 DOI: 10.1007/s00146-025-02429-0
Emma Dahlin
Citations: 0
Moral disagreement and the limits of AI value alignment: a dual challenge of epistemic justification and political legitimacy
IF 4.7 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-06-21 DOI: 10.1007/s00146-025-02427-2
Nick Schuster, Daniel Kilov

AI systems are increasingly in a position to have deep and systemic impacts on human wellbeing. Projects in value alignment, a critical area of AI safety research, must ultimately aim to ensure that all those who stand to be affected by such systems have good reason to accept their outputs. This is especially challenging where AI systems are involved in making morally controversial decisions. In this paper, we consider three current approaches to value alignment: crowdsourcing, reinforcement learning from human feedback, and constitutional AI. We argue that all three fail to accommodate reasonable moral disagreement, since they provide neither good epistemic reasons nor good political reasons for accepting AI systems’ morally controversial outputs. Since these appear to be the most promising approaches to value alignment currently on offer, we conclude that accommodating reasonable moral disagreement remains an open problem for AI safety, and we offer guidance for future research.

Citations: 0
Democratic legitimacy of AI in judicial decision-making
IF 4.7 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-06-19 DOI: 10.1007/s00146-025-02411-w
Anastasia Nefeli Vidaki, Vagelis Papakonstantinou

Concerns have been expressed regarding the impact of automation and the penetration of new technologies into the judicial field on fundamental rights, democratic values, and the notion of legitimacy in general. Particular risks are posed to legitimate judicial decision-making and to the rights of the parties to court proceedings. This paper examines the complex relationship between artificial intelligence (AI) and the democratic legitimacy of judicial decision-making. While AI systems have been introduced in various areas of public administration to support the application of law and public policy, their role in the judiciary raises distinct questions about the legitimacy of algorithmic influence on adjudication. Traditional judicial legitimacy is grounded in principles of impartiality, transparency, and reasoned justification, core democratic tenets that AI systems threaten to disrupt. There is a real possibility that biased algorithms will be deployed in the administration of justice: judges, with their impartial and independent thinking and reasoning, would be crowded out, and the judiciary gradually replaced by machines reaching decisions based on statistics rather than individualized assessment. This scenario, which is not so far-fetched, menaces the whole democratic structure and idea. This paper reviews theoretical perspectives on democratic legitimacy, focusing on contrasting views of judicial authority as either an undemocratic imposition on political rights or a consensual safeguard for fundamental rights within a democratic context. Unlike previous studies that examine these topics in isolation, this paper provides a comprehensive framework for evaluating the diverse degrees of AI automation and how they affect impartiality, publicity, and reasoning. It goes further by exploring the possible threats AI poses to those aspects of democratic legitimacy and suggesting solutions to counterbalance them. Despite doubts over the compatibility between AI and democratic ideals, this paper contributes an innovative hybrid model for judicial decision-making that integrates human oversight with AI assistance, seeking to reconcile the benefits of AI with the need to uphold democratic principles within the judicial review process. This approach aims to fill a critical gap in the current literature by directly confronting the challenges and opportunities presented by AI in judicial contexts, with a view to sustaining democratic values in a future where the role of AI in the judiciary is likely to expand.

Citations: 0
Joint journeys: the linguistic domestication of smart speakers and their users in interaction
IF 4.7 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-06-14 DOI: 10.1007/s00146-025-02384-w
Tim Hector

This article develops the concept of joint journeys as a metaphor to analyze how smart speakers become embedded in everyday domestic life and to trace the reciprocal, linguistically mediated processes of domestication. While the domestication framework is well established in media studies, AI-based, networked technologies like smart speakers challenge its underlying assumptions by connecting private households to global infrastructures, thereby blurring the boundaries between the public and the private. Drawing on video and audio recordings from German households, the article explores how conversational linguistic practices contribute to the domestication of smart speakers. Using methods from ethnomethodological conversation analysis and interactional linguistics, the study traces how smart speakers become integrated into everyday life, not just materially and functionally but also discursively, through practices relating to placement decisions, adaptation to sequential structures, personalization features, and reactions to malfunction. The article shows that mutual accommodation takes place: while users adapt their language to interface constraints, devices also get 'personalized' towards their users. The metaphor of joint journeys emphasizes that the co-evolution of users and devices is an ongoing, non-linear expedition shaped by language, socio-material environments, and infrastructural logics. These observations make it clear that it is through practices and language that AI technologies become integrated into everyday culture, which also raises questions about the broader datafied ecosystems to which interactions with them contribute.

Citations: 0
The material making of language as practice of global domination and control: continuations from European colonialism to AI
IF 4.7 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-06-14 DOI: 10.1007/s00146-025-02389-5
Bettina Migge, Britta Schneider

Although AI language technologies are typically presented as future-oriented technological innovation, none of the elements of machine learning technologies are unaffected by the cultural and historical contexts of their emergence. This is particularly true of the constructions of language, and the materialization of language, in AI. Examination of computational language culture reveals striking continuities with the concepts of language, and their materialization in technological settings, that mark the history of European colonialism. Based on an in-depth analysis of how languages were materially produced under colonialism and how they are treated in AI technologies, we show that strong colonial continuities persist in language materialization processes to this day. This also indicates the crucial role that language materializations play in the construction and maintenance of power and social order in a global realm.

Citations: 0
The prediction of non-ergodic humanity by artificial intelligence
IF 4.7 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-06-13 DOI: 10.1007/s00146-025-02393-9
Peter Stewart

This article aims to affirm and instantiate the main accounts showing the intrinsic limitations of artificial intelligence computing in a real world of organisms, people, and speech. It is argued that these limits mainly concern non-ergodic (or non-repeating) phenomena. The paper aims to extend the debate on the limits of AI through a preliminary examination of the dispersion of both regularities and non-ergodic phenomena and processes in society and in human persons. It is argued that regularities and non-ergodic processes are deeply intertwined. Social regularity, arising for example from the built environment and from conformity, is discussed. In society, non-ergodicity is found especially in the lifeworld of speech and intersubjectivity. The human person creates non-ergodicity through numerous routes, while individual regularities are seen in things such as habit and routine. This study asserts that human intersubjective life in the often non-ergodic lifeworld, and the inbuilt non-repeating dimensions of an individual's living out of the world, should be recognized as extensive areas where AI prediction will be weak. It is hypothesized that the intensity of non-ergodicity in a phenomenon is a firm indicator of weak AI prediction, and that the most successful AI predictions of social phenomena predominantly reflect the sort of social regularities discussed in this article.

Citations: 0
Every wave carries a sense of déjà vu: revisiting the computerization movement perspective to understand the recent push towards artificial intelligence
IF 4.7 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-06-13 DOI: 10.1007/s00146-025-02402-x
Xiaoyao Han, Oskar J. Gstrein, Vasilios Andrikopoulos, Ronald Stolk

Analyzed through the lens of the "computerization movement" (CM), the development of revolutionary technologies has consistently followed a recurring trajectory in terms of origin, momentum, diffusion, and societal impact. Building on the analysis of selected historical trajectories, similar dynamics are discernible in the recent push towards the adoption of artificial intelligence (AI), a push enhanced by the capabilities provided by Big Data infrastructure. This paper explores Big Data and AI within the framework of CMs, analyzing their driving visions, trajectories, interconnectedness, and the societal discourses formed around their adoption. By drawing parallels with selected past CMs and situating current events within their historical context, this study provides a novel perspective intended to facilitate a better understanding of the current technological landscape and to aid in navigating the complex interplay between innovation, social change, and human expectations. The study shows that even if technological innovations remain central to the recent push towards AI adoption, the shared beliefs and visionary ideals underpinning adoption are equally influential. These beliefs and ideals have continually mobilized people around the relevance of AI, in the past and today, even as the supporting infrastructure, core technologies, and their relevance for society have evolved.

Citations: 0
Move fast and break people? Ethics, companion apps, and the case of Character.ai
IF 4.7 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2025-06-10 DOI: 10.1007/s00146-025-02408-5
Vian Bakir, Andrew McStay

Riffing off ‘move fast and break things’, the internal motto coined by Meta’s Mark Zuckerberg, this paper examines the ethical dimensions of human relationships with AI companions, focusing on Character.ai—a platform where users interact with AI-generated ‘characters’ ranging from fictional figures to representations of real people. Drawing on an assessment of the platform’s design, and the first civil lawsuit brought against Character.ai in the USA in 2024 following the suicide of a teenage user, this paper identifies unresolved ethical issues in companion-based AI technologies. These include risks from difficulty in separating AI-based roleplay from real life, unconstrained AI models performing edgy characters, reality detachment, and confusion by dishonest anthropomorphism and emulated empathy. All have implications for safety measures for vulnerable users. While acknowledging the potential benefits of AI companions, this paper argues for the urgent need for ethical frameworks that balance innovation with user safety. By proposing actionable recommendations for design and governance, the paper aims to guide industry, policymakers, and scholars in fostering safer and more responsible AI companion platforms.

{"title":"Move fast and break people? Ethics, companion apps, and the case of Character.ai","authors":"Vian Bakir,&nbsp;Andrew McStay","doi":"10.1007/s00146-025-02408-5","DOIUrl":"10.1007/s00146-025-02408-5","url":null,"abstract":"<div><p>Riffing off <i>move fast and break things</i>, the internal motto coined by Meta’s Mark Zuckerberg, this paper examines the ethical dimensions of human relationships with AI companions, focusing on Character.ai—a platform where users interact with AI-generated ‘characters’ ranging from fictional figures to representations of real people. Drawing on an assessment of the platform’s design, and the first civil lawsuit brought against Character.ai in the USA in 2024 following the suicide of a teenage user, this paper identifies unresolved ethical issues in companion-based AI technologies. These include risks from difficulty in separating AI-based roleplay from real life, unconstrained AI models performing edgy characters, reality detachment, and confusion by dishonest anthropomorphism and emulated empathy. All have implications for safety measures for vulnerable users. While acknowledging the potential benefits of AI companions, this paper argues for the urgent need for ethical frameworks that balance innovation with user safety. 
By proposing actionable recommendations for design and governance, the paper aims to guide industry, policymakers, and scholars in fostering safer and more responsible AI companion platforms.</p></div>","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"40 8","pages":"6365 - 6377"},"PeriodicalIF":4.7,"publicationDate":"2025-06-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00146-025-02408-5.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145529688","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Reformulating Digital Leninism: a response to Sebastian Heilmann’s notions on digital governance in China
IF 4.7 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2025-06-09 DOI: 10.1007/s00146-025-02412-9
Eco Hamersma

Discussions of Chinese policies on digital governance, development, and surveillance in general, and of the Social Credit System in particular, have been described using the term Digital Leninism. The purpose of this paper is to explicate the nature of this term and to re-evaluate its foundational principles. In its original context, Digital Leninism was coined in 2016 by Sebastian Heilmann of the Mercator Institute for China Studies to interpret Chinese digital policies within the environment of the authoritarian one-party rule of the Chinese Communist Party led by Chairman Xi Jinping. Since then, the term has become popular in academic discourse. The definition of Digital Leninism generally used in academic literature is one where digital technology is focused on social governance while simultaneously maintaining a strong security perspective, particularly within the frame of Xi’s administration utilizing cutting-edge digital technologies for algorithmic governance. It is, therefore, seemingly used exclusively in a Chinese context. However, although alluding to Leninist thought by including its namesake in the two-word term, Heilmann’s original formulation lacks any Leninist ideological underpinning. In short, the Leninist connection in the original formulation amounts to little more than the combination of the Chinese state's Leninist background with authoritarian practices in cyberspace. We would, therefore, argue that Heilmann’s original formulation is simply another stand-in for the more broadly applicable term digital authoritarianism. Meanwhile, our adjustment of Heilmann’s theory sets out to universalise the notion beyond its unnecessarily Chinese context through the application of Lenin’s ideological worldview, specifically by treating class consciousness as a fundamental pillar of digital governance within a digital Leninist system. 
In doing so, we are able to provide a potential insight into the internal logic of the Chinese Communist Party in its endeavours to employ advanced digital monitoring, manipulation, and control, while simultaneously using this reformulated Digital Leninism to provide a better rationale for the development of Social Credit Systems in a Chinese environment as one example policy. To be sure, this paper is not attempting to argue that the development of Social Credit Systems derives wholly from the re-evaluated notion of Digital Leninism. Instead, this endeavour aims to add depth where before there was only a superficial framework, by placing Digital Leninism within a line of policies implemented by a Leninist vanguard party to remain in control of a population which has not yet transitioned out of false consciousness: occupying a new policy space, with an orthodox theoretical underpinning, at the intersection of the real world and cyberspace, a space created through the advancement of technology.

{"title":"Reformulating Digital Leninism: a response to Sebastian Heilmann’s notions on digital governance in China","authors":"Eco Hamersma","doi":"10.1007/s00146-025-02412-9","DOIUrl":"10.1007/s00146-025-02412-9","url":null,"abstract":"<div><p>Discussion of Chinese policies on digital governance, development, and surveillance in general, as well as the Social Credit System in particular, have been described using the terminology <i>Digital Leninism</i>. The purpose of this paper is to explicate the nature of this term to re-evaluate its foundational principles. Within the original context, Digital Leninism was coined in 2016 by Sebastian Heilmann of the Mercator Institute for China Studies to interpret Chinese digital policies within the environment of the authoritarian one-party policies of the Chinese Communist Party led by Chairman Xi Jinping. Since then, the term has become popular in academic discourse. In general, the definition of Digital Leninism which generally used in academic literature is one, where digital technology is focused on social governance while simultaneously maintaining a strong security perspective. Particularly within the frame of Xi’s administration utilizing cutting-edge digital technologies for algorithmic governance. It is, therefore, seemingly used exclusively in a Chinese context. However, although alluding to Leninist thought via the inclusion of its namesake in the two-word term, Heilmann’s original formulation lacks any Leninist ideological underpinning. In short, the Leninist connection in the original formulation is as basic as the combination of the Chinese state's Leninist background with authoritarian practices in cyberspace. We would, therefore, argue that Heilmann’s original formulation is simply another stand-in for the more broadly applicable term <i>digital authoritarianism</i>. 
Meanwhile our adjustment of Heilmann’s theory sets to universalise the notion out of its unnecessary Chinese context through the application of Lenin’s ideological worldview, specifically by looking at class consciousness as a fundamental pillar of digital governance within a digital Leninist system. In doing so we are able to provide a potential insight into the internal logic of the Chinese Communist Party in its endeavours to employ advanced digital monitoring, manipulation, and control. Simultaneously using this reformulated Digital Leninism to provide a better rationale for the development of Social Credit Systems in a Chinese environment as one example policy. To be sure, this paper is not attempting to issue a cause-all end-all argument for the development of Social Credit Systems as deriving from the revaluated notion of Digital Leninism. Instead, this endeavour aims to add depth, where before there was only a superficial framework by placing Digital Leninism within a line of policies implemented by a Leninist vanguard party to remain in control of a population which has not yet transitioned out of false consciousness. Occupying a new policy space, with an orthodox theoretical underpinning, at the intersection of the real world and cyberspace, a space created through the advancement of technology.</p></div>","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"40 8","pages":"6357 - 6364"},"PeriodicalIF":4.7,"publicationDate":"2025-06-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145529603","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0