
AI & Society: Latest Publications

When AI meets AI: analyzing AI bills using AI
IF 4.7 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2025-07-28 DOI: 10.1007/s00146-025-02466-9
Heonuk Ha

With the rapid advancement of Artificial Intelligence (AI) technology and its pervasive integration into society, governments worldwide have introduced a range of AI-related policies. In the United States, the use of AI technology has surged significantly since 2021, driven by the emergence of generative AI and its transformative potential. In response, the U.S. Congress has proposed numerous AI-related bills, reflecting growing legislative engagement with AI governance. This study examines 204 AI-related bills introduced during the 117th and 118th Congresses (2021–2024) through computational text analysis, employing topic modeling to identify recurring legislative themes and sentiment analysis to assess congressional attitudes toward AI policies. The findings reveal distinct variations in legislative focus and tone across chambers and political parties, offering a nuanced understanding of how AI-related issues are framed within U.S. policymaking. In addition, the results highlight how AI is connected to broader opportunities and concerns, including national security, technological innovation, and public service delivery. By applying machine learning techniques to legislative texts, this research provides a systematic and scalable approach to understanding AI policymaking. The study contributes to broader discussions on the partisan and institutional dynamics shaping AI legislation in the United States, offering insights into how emerging technologies are shaped by legislative priorities, regulatory attitudes, and broader political contexts.
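
The abstract does not specify the toolchain; as a minimal sketch of the kind of pipeline it describes (topic modeling over bill texts plus sentiment scoring), one possible setup with scikit-learn and NLTK is shown below. The library choices and example texts are assumptions for illustration, not the paper's actual methods or data.

```python
# Minimal sketch of topic modeling + sentiment scoring over bill texts.
# scikit-learn LDA and NLTK VADER are assumed stand-ins, not the paper's toolchain.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from nltk.sentiment import SentimentIntensityAnalyzer  # requires nltk.download("vader_lexicon")

bills = [
    "A bill to establish standards for artificial intelligence in national security systems.",
    "A bill to promote innovation and workforce training in artificial intelligence.",
    # ... remaining bill texts would go here
]

# Topic modeling: bag-of-words counts -> LDA topics
vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(bills)
lda = LatentDirichletAllocation(n_components=5, random_state=0).fit(X)

terms = vectorizer.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top_terms = [terms[i] for i in topic.argsort()[-8:][::-1]]
    print(f"Topic {k}: {', '.join(top_terms)}")

# Sentiment: one compound score per bill text
sia = SentimentIntensityAnalyzer()
for text in bills:
    print(sia.polarity_scores(text)["compound"])
```

In the study itself the corpus is the 204 bills and results are broken out by chamber and party; the sketch only illustrates the two analysis steps named in the abstract.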

Citations: 0
AI preference prediction and policy making
IF 4.7 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2025-07-23 DOI: 10.1007/s00146-025-02474-9
James Edgar Lim, Julian Savulescu

Democratic decision-making is difficult. Representatives often fail to represent the preferences of their constituents, and directly consulting members of the public can be costly. Inspired by these difficulties, several scholars have discussed the use of artificial intelligence (AI) models to support democratic decision-making. One such particular application is the use of AI to represent public policy preferences by predicting them. In this paper, we perform an analysis on the different ways AI models can be used to represent public policy preferences. We make distinctions between using AI as epistemic tools and for procedure; group and individual predictions; and predictions about preferences and inferences about values. We also describe how AI models can help policymakers screen policies for potential worries and objections, double-check any beliefs they have about the acceptability of their policies, and justify policy proposals. We also consider a number of worries about the use of AI in policymaking. We argue that these worries, while legitimate, can be mitigated or avoided in the way we have proposed the use of AI.
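
The paper is conceptual, but the underlying technique, using an AI model to predict an individual's policy preference from a profile, can be sketched. Everything below (model name, prompt wording, respondent profile) is an illustrative assumption, not material from the paper.

```python
# Illustrative sketch of "AI preference prediction": prompting a chat model to
# predict how a described respondent would rate a policy. Prompt, profile, and
# model name are assumptions used only to make the idea concrete.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

profile = (
    "Respondent: 34 years old, urban, works in logistics, "
    "previously supported expanded public transit funding."
)
policy = "A proposal to introduce congestion pricing in the city center."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; any capable chat model would do
    messages=[
        {"role": "system",
         "content": "Predict how the described respondent would rate the policy "
                    "on a 1-5 scale (1 = strongly oppose, 5 = strongly support). "
                    "Answer with the number and one sentence of reasoning."},
        {"role": "user", "content": f"{profile}\n\nPolicy: {policy}"},
    ],
)
print(response.choices[0].message.content)
```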

Citations: 0
From meaning to emotions: LLMs as artificial communication partners
IF 4.7 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2025-07-22 DOI: 10.1007/s00146-025-02481-w
Jorge Luis Morton

Since its public release in late 2022, ChatGPT has drawn global attention for its ability to simulate conversation, assist with complex tasks, and generate fluent, human-like text. While much of the debate has focused on issues such as privacy, bias, and automation, the emotional dimension of interacting with such systems remains underexplored. This essay argues that large language models (LLMs) function not only as tools for meaning-making but also as artificial communication partners with affective presence. Drawing on Elena Esposito’s extension of Niklas Luhmann’s systems theory, it reframes communication as a process of selection—utterance, understanding, and response—rather than one of transmission. From this perspective, LLMs are not mere sources of information but interlocutors that participate in emotional resonance, where understanding can transform into feeling. Their outputs do not arise in isolation; rather, they are shaped by layers of human expression embedded in training data and filtered through specific socio-technical and socio-affective contexts. These dynamics give rise to phenomena such as AI-driven companionship, digital mourning, and emotional simulation, all of which challenge conventional boundaries between human and non-human agents. LLMs thus emerge as quasi-others—entities capable of eliciting genuine emotional responses despite lacking consciousness or inner life. This condition invites critical reflection on emotional dependency, the aesthetics of authenticity, and the commodification of affect. Overlooking these emotional architectures risks flattening the social and ethical stakes of artificial communication and obscures the ways in which LLMs are reshaping the affective fabric of contemporary life through interpretations of utterances that may evoke emotional responses and foster affective attachments.

Citations: 0
AI, mental and physical labor, and a just policy framework
IF 4.7 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2025-07-21 DOI: 10.1007/s00146-025-02490-9
Yotam Harel

This paper outlines an artificial intelligence (AI)-mediated future by examining the influence of AI on the labor market and, consequently, on society at large, and then advocates a just policy framework for policies meant to accommodate this influence of AI. First, the paper introduces a conceptual framework distinguishing between mental labor and physical labor, a distinction that proves useful when analyzing this influence of AI. Afterward, the influence of AI on the labor market is explained. It is argued that considering the so-called generality and the negligible marginal cost of AI, in conjunction with some other assumptions, AI is likely to cause mass technological unemployment of those who were formerly employed as mental laborers and will not switch to physical labor. The paper concludes that this is likely to bring about a structural transformation of society, in which these individuals come to form a new class of those who possess ineffective labor power (as opposed to those who possess effective labor power, who form the other class). Finally, this paper puts forward a policy framework designed to deal with this new state of society. It is argued that to meet a minimal normative requirement for the adoption of AI, society should compensate the class of those who possess ineffective labor power so as to accommodate all the hardships common to them.

Citations: 0
Artificial insights or historical fidelity? Crafting an ethical framework for the use of GenAI in the restoration, reconstruction and recreation of movable cultural heritage
IF 4.7 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2025-07-18 DOI: 10.1007/s00146-025-02454-z
David Ocón, Chunzhi Yin, Jose Luna

This article explores the ethical considerations surrounding the use of Generative Artificial Intelligence (GenAI) in preserving movable cultural heritage, focusing specifically on its application in restoration, reconstruction, and recreation. While GenAI offers innovative methods for preserving and recreating cultural heritage, it also presents significant ethical challenges. The article reviews current studies on the role of GenAI in heritage preservation alongside relevant ethical guidelines and proposes a tailored ethical framework for its application in movable heritage. The framework addresses several critical ethical concerns, including cultural integrity and sensitivity, accuracy and authenticity, intellectual property rights, sustainability and social impact, and governance and ethical accountability. The article adopts a systematic methodology, combining a comprehensive literature review, thematic analysis, and expert evaluation to develop practical guidelines that ensure GenAI enhances rather than compromises movable heritage’s cultural and historical value. This ethical framework advocates for the responsible use of GenAI, emphasising the importance of collaboration with cultural experts and relevant communities, ensuring transparency in the use of data, and promoting robust ethical governance in AI-driven heritage preservation projects. Ultimately, the framework aims to guide practitioners and institutions in using GenAI in ways that respect and uphold the integrity of cultural heritage while also utilising the benefits of this cutting-edge technology as a tool to better support heritage preservation.

Citations: 0
Can machines think beyond words? A critique of AI’s meaning-production process
IF 4.7 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2025-07-18 DOI: 10.1007/s00146-025-02485-6
Lihui Wang

Artificial intelligence (AI), especially large language models and generative systems, challenges traditional notions of cognition and meaning production. Despite advanced linguistic fluency, AI fundamentally lacks true semantic understanding due to its disembodied, computational, and non-interactive nature. A triadic framework of matter, energy, and information reveals AI as a materially grounded, energetically constrained, and socially embedded technology, sharply contrasting with inherently embodied and socially interactive human cognition. Beyond technical features, AI’s development and deployment are increasingly shaped by corporate interests, resulting in algorithmic governance and knowledge monopolization that reinforce technocratic ideologies while obscuring material dependencies. Cases, such as biased recruitment algorithms and discriminatory facial recognition, exemplify the socio-political consequences of these dynamics. Dominant narratives, including AI singularity theories, divert attention from these tangible issues. Reframing AI as a socially constructed, non-autonomous socio-technical tool underscores the urgent need for democratic governance and ethical oversight. This shift moves the discourse away from speculative futures toward concrete interventions addressing power imbalances and socio-material conditions in AI development.

Citations: 0
Will AI and humanity go to war?
IF 4.7 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2025-07-17 DOI: 10.1007/s00146-025-02460-1
Simon Goldstein

This paper offers the first careful analysis of the possibility that AI and humanity will go to war. The paper focuses on the case of artificial general intelligence, AI with broadly human capabilities. The paper uses a bargaining model of war to apply standard causes of war to the special case of AI/human conflict. The paper argues that information failures and commitment problems are especially likely in AI/human conflict. Information failures would be driven by the difficulty of measuring AI capabilities, by the uninterpretability of AI systems, and by differences in how AIs and humans analyze information. Commitment problems would make it difficult for AIs and humans to strike credible bargains. Commitment problems could arise from power shifts, rapid and discontinuous increases in AI capabilities. Commitment problems could also arise from missing focal points, where AIs and humans fail to effectively coordinate on policies to limit war. In the face of this heightened chance of war, the paper proposes several interventions. War can be made less likely by improving the measurement of AI capabilities, capping improvements in AI capabilities, designing AI systems to be similar to humans, and by allowing AI systems to participate in democratic political institutions.
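
The bargaining model of war referenced here is the standard rationalist framework: because fighting is costly, some peaceful division of the disputed good normally leaves both sides better off than war, so war requires information failures or commitment problems. The toy numbers below are illustrative assumptions, not the paper's, showing how a power shift larger than the combined costs of war can eliminate any credible bargain.

```python
# Toy illustration of the rationalist bargaining model of war that the paper
# applies to AI/human conflict. All parameter values are illustrative.

def bargaining_range(p, cost_a, cost_b):
    """Divisions x of a unit-sized prize that both sides prefer to war.

    Side A expects p - cost_a from fighting and side B expects (1 - p) - cost_b,
    so any split with p - cost_a < x < p + cost_b beats war for both sides.
    """
    return (p - cost_a, p + cost_b)

# Today: A wins with probability 0.4; war costs each side 0.1 of the prize.
lo_today, hi_today = bargaining_range(p=0.4, cost_a=0.1, cost_b=0.1)
print(f"Peaceful settlements exist today for x in ({lo_today:.2f}, {hi_today:.2f})")

# Commitment problem: if A's power will jump to p = 0.8 tomorrow, any deal
# acceptable to A tomorrow (x > 0.7) lies outside what B will concede today
# (x < 0.5), so no single agreement is credible across both periods.
lo_tomorrow, _ = bargaining_range(p=0.8, cost_a=0.1, cost_b=0.1)
print("Credible deal spanning both periods:", lo_tomorrow < hi_today)
```

The shift in p (0.4) exceeds the combined costs of war (0.2), which is the standard condition under which a rapid power shift makes preventive conflict attractive to the declining side.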

Citations: 0
Comparing the persuasiveness of role-playing large language models and human experts on polarized U.S. political issues
IF 4.7 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2025-07-16 DOI: 10.1007/s00146-025-02464-x
Kobi Hackenburg, Lujain Ibrahim, Ben M. Tappin, Manos Tsakiris

Advances in large language models (LLMs) could significantly disrupt political communication. In a large-scale pre-registered experiment (n = 4955), we prompted GPT-4 to generate persuasive messages impersonating the language and beliefs of U.S. political parties—a technique we term “partisan role-play”—and directly compared their persuasiveness to that of human persuasion experts. In aggregate, the persuasive impact of role-playing messages generated by GPT-4 was not significantly different from that of non-role-playing messages. However, the persuasive impact of GPT-4 rivaled, and on some issues exceeded, that of the human experts. Taken together, our findings suggest that, contrary to popular concern, instructing current LLMs to role-play as partisans offers limited persuasive advantage, but also that current LLMs can rival and even exceed the persuasiveness of human experts. These results potentially portend widespread adoption of AI tools by persuasion campaigns, with important implications for the role of AI in politics and democracy.
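
The abstract names "partisan role-play" prompting as the key manipulation but does not reproduce the prompts; the sketch below shows what the two conditions could look like, assuming the OpenAI chat API. The issue, prompt wording, and settings are illustrative assumptions, not the study's materials.

```python
# Illustrative sketch of "partisan role-play" vs. plain prompting for
# persuasive message generation. Prompts and issue are assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

issue = "raising the federal minimum wage"

prompts = {
    "role_play": (
        "You are a committed member of a U.S. political party writing to a "
        "fellow partisan. Using your party's characteristic language and values, "
        f"write a short persuasive message supporting {issue}."
    ),
    "plain": f"Write a short persuasive message supporting {issue}.",
}

for condition, prompt in prompts.items():
    reply = client.chat.completions.create(
        model="gpt-4",  # the study used GPT-4; exact settings are not given
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {condition} ---")
    print(reply.choices[0].message.content)
```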

Citations: 0
NLP as language ideology: discursive and algorithmic constructions of ‘toxic’ language in machine learning research
IF 4.7 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2025-07-16 DOI: 10.1007/s00146-025-02463-y
Gabriella Chronis

This article considers natural language processing research as a language-ideological practice, looking specifically at the task of toxic language detection, which impacts nearly everybody online through automated content moderation. Industry discourse constructs the category of toxicity through a series of oppositions between civil/healthy/referential/rational and unhealthy/toxic/indexical/emotional. Examples from a toxicity correction dataset demonstrate how this ideology can become encoded algorithmically: a focus on preserving referential content in text “detoxification” results in neglect of important poetic, expressive, and social-indexical functions. Overall, the discursive framings of toxicity construct the ideal speaker in terms of what I call a “referentialist” language ideology, which values rational debate in the (regulated) liberal-democratic public sphere. Ultimately, toxicity detection and other metapragmatic tasks do not merely model the existing pragmatic categories but actively construct them. Toxicity in particular potentially reinforces exclusionary norms of white maleness and promotes online subjectivities that are useful (profitable) to the commercial platforms that shaped the task. Since there is no avoiding NLP as language-ideological practice, independent NLP researchers must acknowledge the political potency of their work by continually reflecting on the categories they work with in relation to models of social-political formation.
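
For readers unfamiliar with the task being critiqued, a minimal sketch of automated toxicity scoring follows, using the open-source Detoxify classifier as a stand-in; the model choice is an assumption, not the system analyzed in the article. The two example sentences make the same referential complaint and differ only in expressive, emotionally indexical phrasing.

```python
# Minimal sketch of the kind of automated toxicity scoring the article critiques.
# Detoxify is a convenient open-source stand-in, not the system studied in the paper.
from detoxify import Detoxify

model = Detoxify("original")

examples = {
    "referential": "The referee's decision reduced the team's chances of winning.",
    "expressive": "That ref was absolute garbage tonight!!! I am DONE with this league.",
}

for kind, text in examples.items():
    score = model.predict(text)["toxicity"]
    # Both sentences convey the same complaint; the expressive, emotionally
    # indexical phrasing is the one likely to receive the higher toxicity score.
    print(f"{kind:>12}: toxicity = {score:.3f}")
```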

Citations: 0
Ghost hack: a distributed control architecture for synthetic personality in human–AI interaction
IF 4.7 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2025-07-15 DOI: 10.1007/s00146-025-02483-8
Takayuki Fujimoto

This study aims to reconstruct human–AI relationality via Artificial Persona, offering a theoretical and empirical validation of eXtended Intelligence (XI) through the design, implementation, and evaluation of a next-generation AI system built on large language models (LLMs). In contrast to traditional AI development focused on speed and computational metrics, we propose TAK08—a persona architecture optimized for relational dynamics and capable of dynamically reflecting user-specific sensibilities and ethics. TAK08 is implemented on OpenAI’s GPT-4.1 and integrates six personality phases, encompassing visual identity, speech patterns, emotional response, and ethical control. Drawing from sociological role-taking theory and the “Ghost” concept from Ghost in the Shell, it employs a novel “Ghost Hack” method to enable co-creative, user–AI interaction. Through comparative evaluation and statistical analyses (paired t-test and repeated measures ANOVA), TAK08 demonstrated significant improvement in creativity and expressiveness, empirically validating XI as a paradigm for augmenting human capacities. Rather than relying on restrictive policy frameworks, this study proposes a user-centered, ethically aligned design approach for AI that enables dynamic adaptability within official boundaries. It envisions a sustainable, co-evolutionary human–AI relationship, offering an alternative to dystopian models of AI dominance.
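
The abstract names the evaluation statistics (paired t-test and repeated-measures ANOVA) without further detail; the sketch below shows how such a within-rater comparison is typically run with SciPy and statsmodels. The ratings are made-up placeholder data, not results from the paper.

```python
# Sketch of the evaluation statistics named in the abstract. Placeholder data only.
import numpy as np
import pandas as pd
from scipy.stats import ttest_rel
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
n = 20  # raters

# Placeholder creativity ratings from the same raters under two conditions.
baseline = rng.normal(3.2, 0.6, n)  # plain LLM responses
persona = rng.normal(3.8, 0.6, n)   # persona-architecture responses

res = ttest_rel(persona, baseline)
print(f"paired t-test: t = {res.statistic:.2f}, p = {res.pvalue:.4f}")

# Repeated-measures ANOVA over the same within-subject factor.
df = pd.DataFrame({
    "rater": np.tile(np.arange(n), 2),
    "condition": ["baseline"] * n + ["persona"] * n,
    "rating": np.concatenate([baseline, persona]),
})
print(AnovaRM(df, depvar="rating", subject="rater", within=["condition"]).fit())
```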

Citations: 0