
Latest publications in AI & Society

Boundary-making practices: LLMs and an artifactual production of objectivity
IF 4.7 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-06-09 DOI: 10.1007/s00146-025-02409-4
Mihye An

This theoretical article moves beyond representationalist conceptions of objectivity to examine deeper challenges posed by LLMs in collective knowledge production. While LLMs are often criticized for bias, hallucination, and generating “bullshit” that misrepresents reality, such critiques are too narrow to account for how LLMs transform the sociotechnical practices of knowledge-making. Drawing on Barad’s performative account, we argue that objectivity should be understood not as fixed representations of the world but as ongoing ethical and epistemological boundaries emerging through complex intra-acting agencies. We offer a relational analysis of LLM production, framing it as a series of transformations between technical artifacts: from Internet to dataset, dataset to base model, and base model to instruction-tuned model. Each transformation introduces exclusions that enact epistemological, computational, and discursive boundaries. We conclude by proposing “artifactual literacy,” a critical awareness of how LLMs function as contingent artifacts mediating the evolving boundaries of objective knowledge.

AI & Society 40(8), pp. 5967–5979.
Citations: 0
When autonomy breaks: the hidden existential risk of AI
IF 4.7 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-06-07 DOI: 10.1007/s00146-025-02397-5
Joshua Krook

AI risks are typically framed around physical threats to humanity: a loss of control or an accidental error causing humanity's extinction. However, I argue, in line with the gradual disempowerment thesis, that there is an underappreciated risk in the slow and irrevocable decline of human autonomy. As AI starts to outcompete humans in various areas of life, a tipping point will be reached where it no longer makes sense to rely on human decision-making, critical thinking or even creativity. What may follow is a process of gradual de-skilling, where we lose skills that we currently take for granted. Traditionally, it is argued that AI will gain human skills over time and that these skills are innate and immutable in humans. By contrast, I argue that humans may lose such skills as critical thinking, decision-making and even creativity in an AGI world. The biggest threat to humanity is, therefore, not that machines will become more like humans, but that humans will become more like machines.

AI & Society 40(8), pp. 6011–6024.
Citations: 0
AI, journalism, and critical AI literacy: exploring journalists’ perspectives on AI and responsible reporting
IF 4.7 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-06-06 DOI: 10.1007/s00146-025-02407-6
Tomasz Hollanek, Dorian Peters, Eleanor Drage, Raphael Hernandes

This study explores the perspectives of media professionals on the concerns, needs, and responsibilities related to fostering AI literacy among journalists. We report on findings from two workshops with journalists (based in the USA, the UK, China, and India), as well as representatives of civil society organizations and academic specialists in media and AI literacy. Through a reflexive qualitative analysis of data collected during the workshops, we examine the obstacles to AI literacy development among journalists and the quality of resources currently available to them for learning about AI and AI ethics. We highlight the most pressing needs in AI-focused education for journalists and surface participants’ ideas for potential solutions, including an authoritative online compendium on AI and journalism and a database of diverse expert voices. We point to the areas where relevant stakeholders should direct their efforts to support journalists in navigating AI responsibly and critically.

AI & Society 40(8), pp. 6393–6405. Open access.
Citations: 0
Human centered systems start with social dynamics and arrive at ontology
IF 4.7 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-06-04 DOI: 10.1007/s00146-025-02396-6
Brenda O’Neill, Larry Stapleton, Peter Carew

The purpose of this research is to support and nurture tacit knowledge whilst simultaneously leading to the development of machine-based intelligent systems which incorporate machine-readable knowledge for the benefit of society. This paper starts with an introduction to the persistent power struggle between human and technology and shines a light on Professor Michael Cooley’s involvement with the Lucas Plan in the 1970s and his PhD work, which focused on the transition from manual draftsmanship to Computer Aided Design in engineering. A research lab is identified as a ‘complex adaptive system’ and forms the basis of a longitudinal case study on the Human Centered, bottom-up approach to digitisation of cultural heritage. Components required to support and nurture the growth of a Participation Action Research lab are identified. The novel ‘ENRICHER’ method embodies human centeredness and is operationalized, tested and evaluated, and the findings are discussed. Examples of emergence are also discussed. A metric of the ENRICHER method initially identified where the lab did not fully meet all of the method’s 8 points. Subsequent actions adjusted the holonic lens focus to metadata and to the ongoing work on the creation of a cataloging tool for the librarians. The use of XML technologies integrates the work into a larger model of intelligence. It positions the work on the semantic web technology stack and opens up the pathway to ontology generation and to the development and management of large language models. The ENRICHER method is a way of developing human–machine symbiotics that also incorporates AI, e.g. transcription and metadata generation.

AI & Society 40(8), pp. 5981–5998. Open access.
Citations: 0
Spaces for democracy with generative artificial intelligence: public architecture at stake
IF 4.7 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-05-30 DOI: 10.1007/s00146-025-02353-3
Ingrid Campo-Ruiz
Urban space is an important infrastructure for democracy and fosters democratic engagement, such as meetings, discussions, and protests. Artificial Intelligence (AI) systems could affect democracy through urban space, for example, by breaching data privacy, hindering political equality and engagement, or manipulating information about places. This research explores the urban places that promote democratic engagement according to the outputs generated with ChatGPT-4o. This research moves beyond the dominant framework of discussions on AI and democracy as a form of spreading misinformation and fake news. Instead, it provides an innovative framework, combining architectural space as an infrastructure for democracy with the way in which generative AI tools provide a nuanced view of democracy that could potentially influence millions of people. This article presents a new conceptual framework for understanding AI for democracy from the perspective of architecture. For the first case study, in Stockholm, Sweden, AI outputs were later combined with GIS maps and a theoretical framework. The research then analyzes the results obtained for Madrid, Spain, and Brussels, Belgium. This analysis provides deeper insights into the outputs obtained with AI, the places that facilitate democratic engagement and those that are overlooked, and the ensuing consequences. Results show that urban space for democratic engagement obtained with ChatGPT-4o for Stockholm is mainly composed of governmental institutions and non-governmental organizations for representative or deliberative democracy and the education of individuals in public buildings in the city centre. The results obtained with ChatGPT-4o barely reflect public open spaces, parks, or routes. They also prioritize organized rather than spontaneous engagement and do not reflect unstructured events, like demonstrations, or powerful actors, such as political parties or workers’ unions. The places listed by ChatGPT-4o for Madrid and Brussels give major prominence to private spaces, like offices that house organizations with political activities. While cities offer a broad and complex array of places for democratic engagement, outputs obtained with AI can narrow users’ perspectives on their real opportunities, while perpetuating powerful agents by not making them sufficiently visible to be accountable for their actions. In conclusion, urban space is a fundamental infrastructure for democracy, and AI outputs could be a valid starting point for understanding the plethora of interactions. These outputs should be complemented with other forms of knowledge to produce a more comprehensive framework that adjusts to reality for developing AI in a democratic context. Urban space should be protected as a shared space and as an asset for societies to fully develop democracy in its multiple forms. Democracy and urban spaces influence each other and are subject to pressures from different actors, including AI. AI systems should therefore be monitored so that they strengthen democratic values through urban space.
AI & Society 40(8), pp. 5951–5966. Open access.
Citations: 0
The roles of cooperative attitude, personal innovativeness, and anxiety in AI adoption within the design community
IF 4.7 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-05-26 DOI: 10.1007/s00146-025-02390-y
Jo-Yu Kuo, Tzu-Hsuan Wang

The integration of AI technology into design practices has sparked debate within the design community, particularly regarding its behavioral and process-oriented impacts. While existing studies predominantly rely on qualitative methods such as interviews and observations, these approaches may fall short in uncovering the intricate, cross-disciplinary relationships essential for a holistic understanding of AI’s societal implications. This study introduces an acceptance model tailored to designers, based on the Unified Theory of Acceptance and Use of Technology (UTAUT). The proposed model emphasizes the increasing role of online cooperation and affective drivers, including personal innovativeness and anxiety toward AI-integrated design tools. By analyzing 292 valid responses through structural equation modeling, we found that social influence and facilitating conditions are strongly correlated with positive attitudes toward cooperation, while performance expectancy emerged as the key driver for AI adoption in design. Notably, experienced professionals reported greater access to support and resources for AI integration. Although AI-induced anxiety affects certain aspects of technology adoption, it does not significantly diminish performance expectancy. In addition, the study discusses gender differences in technology acceptance and the influence of underlying geographic factors. These insights contribute to the broader discourse on the societal implications of AI, offering practical guidance for the development of AI-integrated design programs in educational and professional contexts.

AI & Society 40(8), pp. 6339–6355.
Citations: 0
Ethical and epistemic implications of artificial intelligence in medicine: a stakeholder-based assessment
IF 4.7 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-05-25 DOI: 10.1007/s00146-025-02398-4
Jonathan Adams

As artificial intelligence (AI) technologies become increasingly embedded in high-stakes fields such as healthcare, ethical and epistemic considerations raise the need for evaluative frameworks to assess their societal impacts across multiple dimensions. This paper uses the ethical-epistemic matrix (EEM), a structured framework that integrates both ethical and epistemic principles, to evaluate medical AI applications more comprehensively. Building on the ethical principles of well-being, autonomy, justice, and explicability, the matrix introduces epistemic principles—accuracy, consistency, relevance, and instrumental efficacy—that assess AI’s role in knowledge production. This dual approach enables a nuanced assessment that reflects the diverse perspectives of stakeholders within the medical field—patients, clinicians, developers, the public, and health policy-makers—who assess AI systems differently based on distinct interests and epistemic goals. Although the EEM has been outlined conceptually before, no published research paper has yet used it to explore the ethical and epistemic implications arising in its key intended application domain of AI in medicine. Through a systematic demonstration of the EEM as applied to medical AI, this paper argues that it encourages a broader understanding of AI’s implications and serves as a valuable methodological tool for evaluating future uses. This is illustrated with the case study of AI systems in sleep apnea detection, where the EEM highlights the ethical trade-offs and epistemic challenges that different stakeholders may perceive, which can be made more concrete if the tool is embedded in future technical projects.

AI & Society 40(8), pp. 5935–5950. Open access.
Citations: 0
Excuses, excuses: moral agency and the professional identity of AI developers
IF 4.7 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-05-24 DOI: 10.1007/s00146-025-02388-6
Tricia Griffin, Brian P. Green, Jos V. M. Welie

Artificial intelligence developers, machine learning engineers, and data scientists occupy a contradictory role in the modern marketplace. While they are central to the business and science of AI, they are marginalized as moral agents. Consequently, the marketplace has cultivated environments in which developers can be unthinking in their own roles and responsibilities, while at the same time tasking them with creating “thinking machines.” The central aim of this article is to show that this state of affairs is morally unjustifiable. To accomplish this, we draw from Arthur Isak Applbaum’s work on adversary roles and Alasdair MacIntyre’s framework for professional moral agency to establish the context dependencies for a “good” AI developer. We then draw from available studies that have engaged developers in questions about their moral agency and place them in conversation with Dennis Thompson and Helen Nissenbaum about the excuses associated with “the problem of many hands,” a concept that has beguiled accountability in the AI community for decades. We then return to MacIntyre’s framework to provide evidence from the same set of studies that AI developers do understand themselves as being responsible for more than just the role, yet they lack a robust community to whom they can submit their choices for ethical scrutiny and work environments that are often non-conducive to their moral actualization. We conclude with specific recommendations for bringing developers’ moral agency more fully into the discourse about AI ethics.

Tricia Griffin, Brian P. Green, Jos V. M. Welie, "Excuses, excuses: moral agency and the professional identity of AI developers," AI & Society 40(8), pp. 6327–6338 (published 2025-05-24). DOI: 10.1007/s00146-025-02388-6. Open access PDF: https://link.springer.com/content/pdf/10.1007/s00146-025-02388-6.pdf
Citations: 0
Recursive InPainting (RIP): how much information is lost under recursive inferences?
IF 4.7 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-05-20 DOI: 10.1007/s00146-025-02351-5
Javier Conde, Miguel Gonzalez, Gonzalo Martínez, Fernando Moral, Elena Merino-Gomez, Pedro Reviriego

The rapid adoption of generative artificial intelligence (AI) is accelerating content creation and modification. For example, variations of a given content, be it text or images, can be created almost instantly and at a low cost. This will soon lead to the majority of text and images being created directly by AI models or by humans assisted by AI. This poses new risks; for example, AI-generated content may be used to train newer AI models and degrade their performance, or information may be lost in the transformations made by AI which could occur when the same content is processed over and over again by AI tools. An example of AI image modifications is inpainting in which an AI model completes missing fragments of an image. The incorporation of inpainting tools into photo editing programs promotes their adoption and encourages their recursive use to modify images. Inpainting can be applied recursively, starting from an image, removing some parts, applying inpainting to reconstruct the image, revising it, and then starting the inpainting process again on the reconstructed image, etc. This paper presents an empirical evaluation of recursive inpainting when using one of the most widely used image models: Stable Diffusion. The inpainting process is applied by randomly selecting a fragment of the image, reconstructing it, selecting another fragment, and repeating the process a predefined number of iterations. The images used in the experiments are taken from a publicly available art data set and correspond to different styles and historical periods. Additionally, photographs are also evaluated as a reference. The modified images are compared with the original ones by both using quantitative metrics and performing a qualitative analysis. The results show that recursive inpainting in some cases modifies the image so that it still resembles the original one while in others leads to image degeneration, so ending with a non-meaningful image. 
The outcome of the recursive inpainting process depends on several factors, such as the type of image, the size of the inpainting masks, and the number of iterations. The results of our evaluation illustrate how information can be lost due to successive AI transformations. The evaluation of additional models, images, and inpainting sequences is needed to confirm whether this observation is generally applicable or if it occurs only in some models and settings.
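The loop the abstract describes (select a random fragment, inpaint it, and repeat for a predefined number of iterations) can be sketched without the diffusion model itself. Below is a toy Python sketch under loud assumptions: the `inpaint` stand-in fills the masked region with the mean of the visible pixels instead of calling Stable Diffusion, and the "image" is a 1-D list; this is only enough to show how repeated reconstruction drifts away from the original, not a reproduction of the paper's experiments.

```python
# Toy sketch of recursive inpainting: each iteration masks a random
# fragment and "reconstructs" it. The real study uses Stable Diffusion;
# here the reconstruction is just the mean of the unmasked values.
import random

def inpaint(image, mask):
    """Fill masked positions with the mean of the unmasked ones (toy model)."""
    visible = [v for v, m in zip(image, mask) if not m]
    fill = sum(visible) / len(visible)
    return [fill if m else v for v, m in zip(image, mask)]

def recursive_inpainting(image, mask_size, iterations, seed=0):
    rng = random.Random(seed)
    current = list(image)
    for _ in range(iterations):
        start = rng.randrange(0, len(current) - mask_size + 1)
        mask = [start <= i < start + mask_size for i in range(len(current))]
        current = inpaint(current, mask)
    return current

def mean_abs_error(a, b):
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

original = [float(i % 7) for i in range(64)]  # toy 1-D "image"
few = recursive_inpainting(original, mask_size=8, iterations=2)
many = recursive_inpainting(original, mask_size=8, iterations=50)
print(mean_abs_error(original, few) < mean_abs_error(original, many))  # → True
```

Even with this crude stand-in, error relative to the original grows with the number of iterations, which mirrors the paper's observation that the outcome depends on mask size and iteration count.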

Javier Conde, Miguel Gonzalez, Gonzalo Martínez, Fernando Moral, Elena Merino-Gomez, Pedro Reviriego, "Recursive InPainting (RIP): how much information is lost under recursive inferences?," AI & Society 40(8), pp. 6309–6325 (published 2025-05-20). DOI: 10.1007/s00146-025-02351-5. Open access PDF: https://link.springer.com/content/pdf/10.1007/s00146-025-02351-5.pdf
Citations: 0
Art Beyond Humanity: exploring the human through machine creation
IF 4.7 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-05-17 DOI: 10.1007/s00146-025-02376-w
Melanie Wilmink

This text interrogates the theory and practice of AI as an artistic material through close analysis of an exhibition themed around human health and disability. Highlighting the project Art Beyond Humanity: AI x Human Collaborations (2023), this analysis explores three key questions regarding the (1) social and ethical impact of GenAI tools on artistic labor, (2) the relational process of co-creation between humans and algorithms, and (3) the aesthetic potential of AI as a creative medium. Led by curator Melanie Wilmink, mathematician Eric Dolores Cuenca, and artist/health advocate Justus Harris, alongside students from Yonsei and Woosong Universities in South Korea, the project used research-creation methodologies to provoke ethical questions about copyright, access to corporately controlled systems, and artistic tactics for production. In this paper, concerns about GenAI impacts on artistic labor are contextualized by art historical precedent and discussion about the limitations of algorithmic creativity. Subsequent sections articulate GenAI images as an assemblage of human and machine perception that embed bias but also hold the potential to draw attention to social issues, while outlining the techniques that artists can use to manipulate GenAI outputs. This can occur through code and database training, but also through the shaping of text prompts since GenAI images are imbricated with language. By exploring how GenAI produces images—as both material and conceptual—artists have the power to generate critical discourse about the social and the ethical impacts of these new technologies in the world.

Melanie Wilmink, "Art Beyond Humanity: exploring the human through machine creation," AI & Society 40(8), pp. 5919–5934 (published 2025-05-17). DOI: 10.1007/s00146-025-02376-w.
Citations: 0