
AI & Society: Latest Publications

On the problems of training generative AI: towards a hybrid approach combining technical and non-technical alignment strategies
IF 4.7 · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-07-07 · DOI: 10.1007/s00146-025-02445-0
Tsehaye Haidemariam, Anne-Britt Gran

This study examines the ethical, legal, and copyright challenges in training generative AI on a large-scale text dataset, using Books3 as a case study. This dataset, used for training foundation models such as GPT, BERT, Meta’s Llama, and StableLM, includes pirated works by nearly 200,000 authors from various countries, raising concerns about intellectual property rights, dataset integrity, and transparency. Our analysis of the initial 99 ISBNs reveals significant biases, including linguistic imbalance, genre skew, and temporal limitations. AI similarity analysis shows that AI-generated text closely mirrors human-written content, suggesting that AI reconstructs patterns in words rather than copying verbatim. However, some parts of the analysis also indicate that AI outputs frequently paraphrase existing content rather than generating wholly independent text, complicating issues of copyright compliance and economic compensation for authors and publishers. These findings highlight the need for improved dataset transparency, ethical considerations, and legal safeguards in generative AI training. We propose a scalable hybrid governance framework integrating technical design-based solutions with regulatory and institutional strategies to ensure responsible AI development. This approach advances AI governance by addressing dataset integrity, source attribution, and evolving ethical, legal, and economic challenges in an increasingly AI-driven society.
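The kind of similarity analysis the abstract describes can be illustrated with a minimal sketch, assuming a generic TF-IDF cosine-similarity approach; this is a stand-in for the general technique, not the authors' actual pipeline, and the sample passages are invented.

```python
# Minimal sketch of a text-similarity check, assuming a TF-IDF +
# cosine-similarity approach; NOT the study's actual pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical passages standing in for a human-written source and an AI output.
human_text = "The old lighthouse keeper watched the storm roll in from the sea."
ai_text = "An aging lighthouse keeper observed the storm approaching over the ocean."

vectorizer = TfidfVectorizer(ngram_range=(1, 2))         # word uni- and bigrams
tfidf = vectorizer.fit_transform([human_text, ai_text])  # 2 x V sparse matrix

score = cosine_similarity(tfidf[0], tfidf[1])[0, 0]
print(f"cosine similarity: {score:.3f}")

# A score near 1.0 would suggest near-verbatim reproduction; a moderate score
# with shared vocabulary is more consistent with the paraphrase pattern the
# study reports.
```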

Citations: 0
A multidisciplinary analysis of transparent AI-driven toxicity detection tools for civic engagement platforms
IF 4.7 · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-07-07 · DOI: 10.1007/s00146-025-02424-5
Maria Zangl, Iliana Loi, Panagiotis Zachos, Michael Bedek, Emmanouil Dimogerontakis, Charikleia-Eleni Nikolaou, Dietrich Albert, Konstantinos Moustakas

Toxic speech on online civic engagement platforms (CEPs) disproportionately affects marginalized groups and threatens the diversity of citizen voices. However, the deployment of AI-driven toxic speech detection (TSD) tools for CEPs faces complex challenges from legal, psychological, and technical perspectives that remain insufficiently explored. We present a first-of-its-kind interdisciplinary review of these challenges, focusing on the explainability of TSD systems and their compliance with European legal standards, and we offer a roadmap for ethical deployment. Our review reveals three main findings. First, although transparency in AI decision-making is necessary from both legal and psychological perspectives, assessing the explainability of AI-driven TSD tools, and their compliance with legal regulations within Europe, remains a significant challenge. Second, current explainability approaches, ranging from toxic span identification to advanced explainable AI methods, lack standardized metrics. This makes it difficult to assess their reliability and appropriateness for CEPs. Third, despite the importance of TSD, frameworks and best practices for CEPs are still lacking in the existing literature. This paper aims to fill this gap by providing a holistic perspective on the challenges and solutions for TSD deployment. It provides the foundation for collaborative efforts to develop and standardize metrics, evaluation protocols, and best practices that can ensure AI decisions in CEPs are transparent, accountable, and aligned with users’ needs.
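The explainability approaches the review surveys can be hinted at with a minimal sketch: a deliberately simple linear classifier whose per-token weights act as a crude span-level explanation. The toy dataset is invented for illustration, and production TSD tools are far more sophisticated.

```python
# Minimal sketch of an interpretable toxicity classifier: per-token weights of
# a linear model double as a crude "toxic span" explanation. The toy dataset
# is invented for illustration; real TSD tools are far more complex.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

texts = [
    "you are an idiot and a disgrace",      # toxic
    "what a stupid, worthless comment",     # toxic
    "thanks for sharing your perspective",  # non-toxic
    "i respectfully disagree with this",    # non-toxic
]
labels = [1, 1, 0, 0]

vec = CountVectorizer()
X = vec.fit_transform(texts)
clf = LogisticRegression().fit(X, labels)

# Positive weights push toward the "toxic" label, giving an inspectable,
# if simplistic, account of which tokens the model reacts to.
weights = dict(zip(vec.get_feature_names_out(), clf.coef_[0]))
comment = "such a stupid idea"
flagged = [(tok, round(weights[tok], 2))
           for tok in comment.split() if weights.get(tok, 0) > 0]
print(flagged)  # tokens contributing toward a toxic prediction
```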

Citations: 0
Shame in the machine: affective accountability and the ethics of AI
IF 4.7 · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-07-06 · DOI: 10.1007/s00146-025-02472-x
Rachel McNealis

The cultural weaponization of shame surrounding the use of artificial intelligence (AI) tools like ChatGPT often redirects ethical scrutiny away from systemic concerns and toward individual users. Drawing on Sara Ahmed’s affect theory, this paper argues that cultural narratives of "AI shaming" function as moral displacement that redirects scrutiny away from the environmental costs, exploitative labor practices, and corporate monopolization defining contemporary AI development. The analysis examines how shame operates across academic and professional settings to create "effort anxiety" that demands both visible human labor and accelerated productivity. Current discourse treats AI use as a personal virtue problem and obscures the carbon-intensive data centers, underpaid content moderators, and proprietary knowledge systems that enable these technologies. Instead of eliminating shame, the paper proposes redirecting it toward collective accountability for AI’s systemic harms. Environmental degradation, algorithmic bias, and extractive infrastructures represent the true ethical frontier of artificial intelligence. Policy frameworks, educational interventions, and governance structures offer pathways for transforming shame from individual punishment into institutional reform. The stakes extend beyond AI itself: as emerging technologies reshape society, the patterns of moral responsibility established now will determine whether innovation serves collective flourishing or perpetuates existing inequalities. Shame can become a vehicle for institutional critique and systemic accountability if we redirect its focus from individual users to the powerful corporations, governance structures, and infrastructural systems that profit from AI’s rapid expansion.

Citations: 0
We need a Belmont report for AI
IF 4.7 · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-07-05 · DOI: 10.1007/s00146-025-02461-0
Kevin Patton

The Belmont Report has a place of great importance in American biomedical research ethics. This paper argues that a similar kind of report, and the legal infrastructure that birthed it, is needed in the United States if we are to preempt a great many of the potential issues that are on the horizon with artificial intelligence (AI). What makes the Belmont Report so important is not just that it established a new basis for how medical professionals ought to treat their patients and experiment participants, but that it did so with the force of law. Establishing an equivalent legal framework for AI is going to take tremendous buy-in from a variety of private and public actors in the United States. The model afforded by the Belmont Report is well suited to generate such buy-in. While this may seem like a daunting task given the various polarizing issues at play in society today, the context that produced the Belmont Report was quite fractious itself. It is the position of this paper that a similarly styled approach to AI regulation can succeed in proactively limiting the harms of AI’s use (and abuse).

Citations: 0
Artificial intelligence and displacement of human work: towards an ethical A.I. inclusion in work through Laborem exercens
IF 4.7 · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-07-05 · DOI: 10.1007/s00146-025-02455-y
Patricia Joy Mobilla

With the rapid advances in science and technology, the introduction of Artificial Intelligence in the workplace has enhanced information processing and the identification of data patterns. At the same time, it has been accompanied by a growing trend of worker displacement justified by the promise of higher profitability and work efficiency. In the Philippines alone, A.I. machines are predicted to replace human workers in 36% of jobs. John Paul II’s social encyclical Laborem exercens, by contrast, understands work as any activity by humans, both manual and intellectual, by which one earns his daily necessities and contributes to the unceasing elevation of the cultural and moral level of society. Work is therefore always faced with fresh fears and threats connected with this dimension of human existence. For this reason, the encyclical emphasizes the priority of labor over capital in the face of technological advancements that conflict with the authentic meaning of work. This study aims to propose an ethical framework for A.I. adoption in the workplace through the encyclical Laborem exercens.

Citations: 0
Bodies in the metaverse: Is there “someone” out there?
IF 4.7 · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-07-04 · DOI: 10.1007/s00146-025-02439-y
Luca Valera, Florencia Alamos, Paulina Ramos, Tomás Vera

The metaverse, enabled by technologies such as virtual reality (VR), augmented reality (AR), and artificial intelligence (AI), challenges our traditional understanding of reality, identity, and corporeality. It offers immersive virtual experiences that blur the lines between the real and the synthetic, creating new opportunities for human interaction, expression, and self-exploration. In this paper, we explore (i) the technological advancements driving the development of the metaverse and its potential applications across various sectors; (ii) the avatar concept, a digital representation of oneself within the metaverse, and its implications for identity and presence; and (iii) the profound impact on our perception and understanding of reality and the complex philosophical and ethical questions it raises. The metaverse stands as a novel frontier in human experience, presenting both opportunities and challenges. It demands a critical reassessment of how we perceive embodiment, awareness, and identity in this digital era. While we embrace its potential to expand human capabilities, we must remain mindful of its risks and ensure its ethical and responsible social integration.

Citations: 0
Commodification in academic writing: a comparative analysis of two LLM apps
IF 4.7 · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-07-04 · DOI: 10.1007/s00146-025-02446-z
Sebastian Weydner-Volkmann

This paper investigates the impact of Large Language Model (LLM)-assisted writing on reflective thinking, building on existing adaptations of Albert Borgmann’s device paradigm to Don Ihde’s postphenomenology. Academic writing can facilitate engagement with our beliefs and pre-judgments, making it highly conducive to reflective thinking. However, generative AI tools, such as OpenAI’s ChatGPT and Microsoft Word Copilot, may undermine such meaningful engagement as they ‘disburden’ users of the effort inherent in reflective writing. Still, we fall short when we leave unexamined the kinds of uses each writing app inclines its users to pursue. Although both apps are built on the same LLM, a cross-comparison reveals that the user interface (UI) design of ChatGPT and Word Copilot affords distinct forms of interaction: ChatGPT’s UI design may, in principle, facilitate reflective engagement through conversational interactions, prompting users to formulate and engage with their beliefs on a given topic. In contrast, Word Copilot emphasizes automated document production, making a similar kind of engaging use unviable. As a conceptual basis for the argument, this paper extends Ihde’s history of writing ‘technics’ and brings it together with recent conceptual developments in postphenomenology by discussing the apps in terms of the ‘quasi-materiality’ of application UIs and the affordances they offer as part of ‘multistabilities’. This paper concludes with a call for academic writers to critically assess how their tools mediate academic writing and thinking processes, arguing that choosing a writing tool for academic writing has ceased to be a matter of personal preference and has become one of academic ethos.
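The two interaction patterns contrasted here can be sketched schematically, with a hypothetical `llm` stub standing in for any chat-completion backend; both the function and the prompts are invented for illustration and are not drawn from either product's API.

```python
# Schematic contrast of the two UI affordances discussed in the paper.
# `llm` is a hypothetical stub, not a real API.
def llm(messages: list[dict]) -> str:
    return f"<model reply to {len(messages)} message(s)>"  # placeholder

# ChatGPT-style: a running dialogue in which the user's evolving formulations
# stay in context, affording iterative, reflective engagement.
dialogue = [{"role": "user", "content": "Here is my thesis draft..."}]
for followup in ["Why might a reviewer object?",
                 "Help me restate my claim more precisely."]:
    dialogue.append({"role": "assistant", "content": llm(dialogue)})
    dialogue.append({"role": "user", "content": followup})

# Word-Copilot-style: one-shot document production, with no sustained
# exchange in which the writer's own beliefs are elicited and tested.
document = llm([{"role": "user", "content": "Write a section on LLM ethics."}])
```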

Citations: 0
The impact of Artificial Intelligence on the “curator-as-artist”: revisiting Ventzislavov’s concept in two cases of AI-based curating
IF 4.7 · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-07-04 · DOI: 10.1007/s00146-025-02462-z
İpek Yeğinsü

The use of Artificial Intelligence (AI) in art is increasingly widespread, and yet its effects are still far from clear. The professional boundaries between the artist and the curator, particularly the “curator-as-artist” organizing thematic exhibitions, are increasingly blurred, and AI complicates their division of labor even further. Through theoretical analysis and a case study, this paper examines how AI’s recent entry into the artistic realm is impacting the role of the curator-as-artist, using the theoretical framework offered by Rossen Ventzislavov’s essay “Idle Arts: Reconsidering the Curator” (2014). Drawing upon the curatorial studies literature, it revisits the two central themes explored by Ventzislavov, i.e., the generation of artistic value and the division of labor between the artist and the curator. Using the theoretical instruments obtained from this examination, it comparatively analyzes two AI-based curatorial projects: the Helsinki Biennial titled “New Directions May Emerge”, hosted by the Helsinki Art Museum, and “Act as if you are a curator: an AI-generated exhibition” at Duke University’s Nasher Museum of Art. The study finds that the art worlds emerging around AI are collaborative and interdisciplinary, and that despite the growing pressure on the figure of the individual curator in charge, AI assemblages still rely on human curatorial agency and authorship, especially for curatorial storytelling and for mediating negotiations among the human participants in the cooperative network.

Citations: 0
Between optimization and challenges: the influence of AI on managerial practices in Tunisia
IF 4.7 · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-07-04 · DOI: 10.1007/s00146-025-02442-3
Chiheb Eddine Inoubli

This study investigates the multifaceted impacts of artificial intelligence (AI) on organizational management within international companies operating in Tunisia. Using a quantitative methodology, the study collected data from 270 managers and employees across diverse enterprises. The findings reveal that AI significantly enhances managerial tasks by automating repetitive processes and fostering autonomy, confirming its role as an optimization tool. Interestingly, AI positively influences interpersonal relationships, exceeding initial expectations of potential negative impacts. However, its effects on organizational culture are negligible. Ethical and legal concerns, though significant, remain moderate, reflecting growing awareness of challenges related to data protection and decision-making transparency. These results highlight the nuanced role of AI in shaping managerial practices and emphasize the need for organizations to adopt balanced strategies that leverage its benefits while addressing its ethical and relational implications. Future research should focus on longitudinal studies to explore AI’s long-term effects and examine contextual factors influencing its integration into diverse organizational settings.

Citations: 0
Data, human subjects, and research ethics in transformative contexts
IF 4.7 · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-07-03 · DOI: 10.1007/s00146-025-02457-w
Ngai Keung Chan, Chi Kwok
{"title":"Data, human subjects, and research ethics in transformative contexts","authors":"Ngai Keung Chan,&nbsp;Chi Kwok","doi":"10.1007/s00146-025-02457-w","DOIUrl":"10.1007/s00146-025-02457-w","url":null,"abstract":"","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"41 1","pages":"473 - 474"},"PeriodicalIF":4.7,"publicationDate":"2025-07-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146098991","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0