AI & Society: Latest Publications
An absurdist ethics of AI: applying Camus’ concepts of rebellion and dignity to the challenges posed by disruptive technoscience
IF 4.7 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-07-15 DOI: 10.1007/s00146-025-02482-9
Marlon Valentijn Kruizinga, Hub Zwart, Valerie Frissen

This article proposes a new, Camusian approach to analyzing and navigating ethical dilemmas in relation to Artificial Intelligence (AI) and, by extension, to other disruptive technoscience. The article takes as its point of departure the Unified Framework of Five Principles for AI in Society, as advanced by Floridi and Cowls (2021), which offers a comprehensive and cohesive framework of the many abstract values and principles brought up in AI ethics discourse. Using a case-study approach, which focuses on the principle of accountability in AI, we demonstrate that, even following an exhaustive systematization of abstract principles, ethical dilemmas still arise whenever we consider applications of the technology in concrete situations. Furthermore, because of the way technology mediates our ethical judgement, a deeper ethical dilemma arises: we can either judge technologies like AI prematurely, without knowing their impact, or after they have been able to bias our norms and intuitions for ethical deliberation. This article then argues that the vulnerability of our ethical judgement to continuous doubt, which is exposed by AI as a landmark case of disruptive technology, can be addressed by integrating Albert Camus’ philosophy of absurdity, rebellion and dignity. Through Camus, we can contextualize our experience of ethical doubt in relation to AI as existentially absurd, while also navigating normative change more confidently with meta-level principles for ethical deliberation itself. The article concludes that, while ethical dilemmas and doubts will persist, this Camusian approach will make us more responsible in undertaking the continuous adaptation and concretizing of our ethical frameworks.

Citations: 0
Can AI have a sense of morality?
IF 4.7 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-07-14 DOI: 10.1007/s00146-025-02476-7
Donghee Shin
{"title":"Can AI have a sense of morality?","authors":"Donghee Shin","doi":"10.1007/s00146-025-02476-7","DOIUrl":"10.1007/s00146-025-02476-7","url":null,"abstract":"","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"40 6","pages":"4169 - 4170"},"PeriodicalIF":4.7,"publicationDate":"2025-07-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144909701","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Not in our image: rethinking anthropomorphism in expert chatbot design
IF 4.7 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-07-12 DOI: 10.1007/s00146-025-02438-z
Gulnara Z. Karimova

This article interrogates how users interpret and respond to anthropomorphic versus minimalist chatbot designs in legal and regulatory advisory domains, contexts where ambiguity is costly and charm rarely billable. Anchored in ten in-depth interviews and supported by probabilistic simulations employing Bayesian inference and Monte Carlo simulation, the study reveals that interface preferences are far from stylistic whimsy. Instead, they reflect deep-seated expectations rooted in professional roles and interactional demands. Practitioners in law, HR, and compliance consistently gravitate toward pared-down, non-human designs and value transparency, cognitive economy, and semantic precision. In contrast, those operating in branding, UX, or emotionally expressive roles tend to welcome anthropomorphic agents, associating them with engagement and affective resonance. The findings advocate for adaptive chatbot architectures: systems that modulate their aesthetic and communicative cues in response to domain norms, user expectations, and interactional context.
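The abstract names Bayesian inference and Monte Carlo simulation as supporting methods. As a rough, hypothetical sketch (not the paper's actual model — the counts and prior below are invented for illustration), a Beta-Binomial posterior over the share of interviewees preferring a minimalist interface can be explored with plain Monte Carlo draws:

```python
import random

# Hypothetical sketch: Beta-Binomial inference on the proportion of
# practitioners who prefer a minimalist (non-anthropomorphic) interface,
# explored via Monte Carlo sampling. All numbers are invented.

def beta_sample(a, b, rng):
    # Sample Beta(a, b) from two Gamma draws: X/(X+Y) with
    # X ~ Gamma(a, 1), Y ~ Gamma(b, 1).
    x = rng.gammavariate(a, 1.0)
    y = rng.gammavariate(b, 1.0)
    return x / (x + y)

def posterior_minimalist_share(prefers_minimalist, total, draws=10_000, seed=0):
    """Posterior mean and central 90% interval for the preference share,
    assuming a uniform Beta(1, 1) prior over the proportion."""
    rng = random.Random(seed)
    a = 1 + prefers_minimalist            # prior pseudo-count + successes
    b = 1 + (total - prefers_minimalist)  # prior pseudo-count + failures
    samples = sorted(beta_sample(a, b, rng) for _ in range(draws))
    mean = sum(samples) / draws
    lo, hi = samples[int(0.05 * draws)], samples[int(0.95 * draws)]
    return mean, (lo, hi)

# e.g. 7 of 10 interviewees in legal/compliance roles prefer minimalist designs
mean, (lo, hi) = posterior_minimalist_share(7, 10)
```

With only ten interviews the 90% interval stays wide, which is exactly why the study pairs qualitative interviews with simulation rather than reporting point estimates alone; the uniform Beta(1, 1) prior is a neutral default, not a claim about the authors' modeling choices.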

Citations: 0
Lessons from the Roman Empire: ‘bread and circuses’ as a model for democracy in the AGI age
IF 4.7 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-07-12 DOI: 10.1007/s00146-025-02449-w
Yusaku Fujii
{"title":"Lessons from the Roman Empire: ‘bread and circuses’ as a model for democracy in the AGI age","authors":"Yusaku Fujii","doi":"10.1007/s00146-025-02449-w","DOIUrl":"10.1007/s00146-025-02449-w","url":null,"abstract":"","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"41 1","pages":"467 - 468"},"PeriodicalIF":4.7,"publicationDate":"2025-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146099204","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
When machines take over: professional chess as a model case for the societal impact of superhuman AI
IF 4.7 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-07-12 DOI: 10.1007/s00146-025-02465-w
Fabian Anicker

Once emblematic of human intellectual mastery, chess has become a domain where machines not only surpass human ability but fundamentally reshape the practice’s social dynamics, meanings, and power structures. This paper examines the transformative impact of superhuman AI on professional chess, positioning it as an analytical model case for understanding AI’s broader societal implications. Drawing on a corpus of 271 hours of transcribed chess commentary, the analysis traces the shift from symbolic AI to deep learning systems, and the consequent reconfiguration of chess as a social practice. The study explores how this transformation alters the game’s meaning, redistributes authority, reshapes power relations, and creates new social roles through AI’s integration. These insights foreshadow challenges for fields such as medicine or law, where AI’s ascendancy may similarly redistribute authority, redefine purpose, and reshape agency.

Citations: 0
When machines decide: the quiet death of judgment at work
IF 4.7 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-07-10 DOI: 10.1007/s00146-025-02448-x
Sibichan Joseph
{"title":"When machines decide: the quiet death of judgment at work","authors":"Sibichan Joseph","doi":"10.1007/s00146-025-02448-x","DOIUrl":"10.1007/s00146-025-02448-x","url":null,"abstract":"","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"41 1","pages":"463 - 465"},"PeriodicalIF":4.7,"publicationDate":"2025-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146099054","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
The institutionalized self: a psychological model of identity formation in AI-governed environments
IF 4.7 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-07-09 DOI: 10.1007/s00146-025-02480-x
Ushio Minami

What happens when AI systems begin to shape not only our decisions but our sense of self? This article introduces the concept of the institutionalized self—a psychological formation that emerges through recursive interactions with algorithmic feedback in predictive institutional environments. As artificial intelligence (AI) becomes integrated into systems governing education, employment, and healthcare, individuals are increasingly evaluated and categorized by opaque, anticipatory classification systems. These interactions influence not only external opportunities but also internal self-understanding. Drawing on symbolic interactionism, self-determination theory, and technological mediation, the paper proposes a three-stage model of self-formation: institutional perception, metacognitive response, and self-reconfiguration. The model accounts for how institutional classifications affect narrative identity, autonomy, and perceived efficacy. Testable hypotheses are offered to support empirical research, and ethical implications are explored concerning the design of AI systems that shape personal meaning making. The institutionalized self reframes the role of psychology in an era where identity is no longer solely self-authored, but increasingly co-constructed with predictive institutions.

Citations: 0
Rise of the algopticon: the algoptic gaze in the age of algorithmic governance and surveillance capitalism
IF 4.7 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-07-09 DOI: 10.1007/s00146-025-02473-w
Trent Bax

This paper introduces the concept of an algopticon to theorize surveillance in the age of algorithmic governance and surveillance capitalism. Building on earlier surveillance models—panopticon, synopticon, banopticon, and super-panopticon—the algopticon represents a qualitative transformation in how power operates through data, prediction, and automation. Drawing on Hegel’s concept of sublation (Aufhebung), the paper argues that the algopticon does not simply replace these earlier frameworks but sublates them: it negates, preserves, and elevates their core logics into a new surveillance regime. Visibility becomes invisibility; discipline becomes prediction; and observation becomes algorithmic categorization. By automating control and embedding it into everyday life, the algopticon alters subjectivity, restructures agency, and deepens asymmetries of power. Through comparative analysis, this paper shows how the algopticon consolidates disciplinary, synoptic, exclusionary, and informational logics into a pervasive system of behavioral governance. It concludes by emphasizing the ethical and political stakes of this shift and calls for algorithmic accountability, transparency, and democratic oversight to ensure that emerging technologies serve justice rather than entrench inequality.

Citations: 0
Rethinking Hoffmann’s The Sandman through the lens of AI: human being and robot
IF 4.7 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-07-09 DOI: 10.1007/s00146-025-02443-2
Chiinngaihkim Guite

E.T.A. Hoffmann’s The Sandman (1816) mixes themes such as fear, eeriness, and the darkening of the relationship between humans and automatons. The Sandman shows the possible emergence of human-like machines, or so-called artificial intelligence (AI), indistinguishable from humans. The paper also looks at emerging technologies, the ethics of artificial life, and the nature of consciousness. The technical ability to create automata, developed at the end of the eighteenth century, fascinated nineteenth-century authors such as Hoffmann. In The Sandman, Olimpia embodies the nineteenth-century fascination with machines and mechanical beings that can imitate and interact with humans. Olimpia is the epitome of the modern concept of AI, in which machines simulate and control human behavior in various capacities. This paper is about how The Sandman anticipates the human ability to understand the nature of reality and technology. With the advent of mass media and the development of AI, I show how the protagonist Nathaniel complicates the conceptualization of the human subject. By relying on and compromising with technology, Nathaniel loses his humanity and turns himself into a madman. The paper also highlights the text and compares it to contemporary AI narratives. Fear and anxiety, reflections on the loss of humanity in the face of technological advancement, and the potential of AI to surpass human intelligence are also discussed.

Citations: 0
Educational pathways for enhancing algorithmic transparency: a discussion based on the phenomenological reduction method
IF 4.7 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-07-08 DOI: 10.1007/s00146-025-02475-8
Chen Yang, Xiaoran LU

Enhancing algorithmic transparency is a pivotal issue in the ethics of artificial intelligence. From a phenomenological perspective, algorithmic transparency comprises two dimensions: the openness and visibility of algorithmic systems and the subject’s cognitive engagement with understanding the operational logic and theoretical underpinnings of algorithms. Consequently, algorithmic transparency should be reframed as a cognitive issue, with education serving as the central pathway for enhancing cognitive capacity. Unlike traditional educational paradigms that prioritize knowledge transmission, phenomenological pedagogy emphasizes intuitive experience and cultivation of critical thinking, enabling learners to achieve deeper understanding and foster the construction of knowledge. This approach not only underscores the practical value of phenomenological pedagogy in advancing algorithmic transparency but also highlights that such transparency depends not merely on public disclosure of algorithmic information but on learners’ experiential intuition and reflective analysis. Compared to purely technical solutions, improving transparency through the enhancement of learners’ cognitive comprehension of algorithms offers distinct and transformative advantages.

Citations: 0