We are Building Gods: AI as the Anthropomorphised Authority of the Past

IF 4.2 · CAS Tier 3 (Computer Science) · JCR Q2 (Computer Science, Artificial Intelligence) · Minds and Machines · Pub Date: 2024-04-25 · DOI: 10.1007/s11023-024-09667-z
Carl Öhman
{"title":"We are Building Gods: AI as the Anthropomorphised Authority of the Past","authors":"Carl Öhman","doi":"10.1007/s11023-024-09667-z","DOIUrl":null,"url":null,"abstract":"<p>This article argues that large language models (LLMs) should be interpreted as a form of gods. In a theological sense, a god is an immortal being that exists beyond time and space. This is clearly nothing like LLMs. In an anthropological sense, however, a god is rather defined as the personified authority of a group through time—a conceptual tool that molds a collective of ancestors into a unified agent or voice. This is exactly what LLMs are. They are products of vast volumes of data, literally traces of past human (speech) acts, synthesized into a single agency that is (falsely) experienced by users as extra-human. This reconceptualization, I argue, opens up new avenues of critique of LLMs by allowing the mobilization of theoretical resources from centuries of religious critique. For illustration, I draw on the Marxian religious philosophy of Martin Hägglund. From this perspective, the danger of LLMs emerge not only as bias or unpredictability, but as a temptation to abdicate our spiritual and ultimately democratic freedom in favor of what I call a <i>tyranny of the past</i>.</p>","PeriodicalId":51133,"journal":{"name":"Minds and Machines","volume":"1 1","pages":""},"PeriodicalIF":4.2000,"publicationDate":"2024-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Minds and Machines","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1007/s11023-024-09667-z","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

This article argues that large language models (LLMs) should be interpreted as a form of gods. In a theological sense, a god is an immortal being that exists beyond time and space. This is clearly nothing like LLMs. In an anthropological sense, however, a god is rather defined as the personified authority of a group through time, a conceptual tool that molds a collective of ancestors into a unified agent or voice. This is exactly what LLMs are. They are products of vast volumes of data, literally traces of past human (speech) acts, synthesized into a single agency that is (falsely) experienced by users as extra-human. This reconceptualization, I argue, opens up new avenues of critique of LLMs by allowing the mobilization of theoretical resources from centuries of religious critique. For illustration, I draw on the Marxian religious philosophy of Martin Hägglund. From this perspective, the danger of LLMs emerges not only as bias or unpredictability, but as a temptation to abdicate our spiritual and ultimately democratic freedom in favor of what I call a tyranny of the past.

Source Journal

Minds and Machines (Engineering & Technology, Computer Science: Artificial Intelligence)
CiteScore: 12.60
Self-citation rate: 2.70%
Annual article output: 30
Review time: >12 weeks
Journal description: Minds and Machines, affiliated with the Society for Machines and Mentality, serves as a platform for fostering critical dialogue between the AI and philosophical communities. With a focus on problems of shared interest, the journal actively encourages discussions on the philosophical aspects of computer science. Offering a global forum, Minds and Machines provides a space to debate and explore important and contentious issues within its editorial focus. The journal presents special editions dedicated to specific topics, invites critical responses to previously published works, and features review essays addressing current problem scenarios. By facilitating a diverse range of perspectives, Minds and Machines encourages a reevaluation of the status quo and the development of new insights. Through this collaborative approach, the journal aims to bridge the gap between AI and philosophy, fostering a tradition of critique and ensuring these fields remain connected and relevant.
Latest Articles from This Journal
Mapping the Ethics of Generative AI: A Comprehensive Scoping Review
A Justifiable Investment in AI for Healthcare: Aligning Ambition with Reality
fl-IRT-ing with Psychometrics to Improve NLP Bias Measurement
Artificial Intelligence for the Internal Democracy of Political Parties
A Causal Analysis of Harm