Adversarial Dynamics in Centralized Versus Decentralized Intelligent Systems.

IF 2.9 · CAS Region 2 (Psychology) · Q1 PSYCHOLOGY, EXPERIMENTAL · Topics in Cognitive Science · Pub Date: 2023-10-30 · DOI: 10.1111/tops.12705
Levin Brinkmann, Manuel Cebrian, Niccolò Pescetelli
Citations: 0

Abstract


Artificial intelligence (AI) is often used to predict human behavior, thus potentially posing limitations to individuals' and collectives' freedom to act. AI's most controversial and contested applications range from targeted advertisements to crime prevention, including the suppression of civil disorder. Scholars and civil society watchdogs are discussing the oppressive dangers of AI being used by centralized institutions, like governments or private corporations. Some suggest that AI gives asymmetrical power to governments, compared to their citizens. On the other hand, civil protests often rely on distributed networks of activists without centralized leadership or planning. Civil protests create an adversarial tension between centralized and decentralized intelligence, opening the question of how distributed human networks can collectively adapt and outperform a hostile centralized AI trying to anticipate and control their activities. This paper leverages multi-agent reinforcement learning to simulate dynamics within a human-machine hybrid society. We ask how decentralized intelligent agents can collectively adapt when competing with a centralized predictive algorithm, wherein prediction involves suppressing coordination. In particular, we investigate an adversarial game between a collective of individual learners and a central predictive algorithm, each trained through deep Q-learning. We compare different predictive architectures and showcase conditions in which the adversarial nature of this dynamic pushes each intelligence to increase its behavioral complexity to outperform its counterpart. We further show that a shared predictive algorithm drives decentralized agents to align their behavior. This work sheds light on the totalitarian danger posed by AI and provides evidence that decentrally organized humans can overcome its risks by developing increasingly complex coordination strategies.
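The core setup the abstract describes — decentralized learners rewarded for coordinating on an action the central predictor fails to anticipate, and a predictor rewarded for anticipating the collective — can be sketched as a repeated adversarial game. The paper trains both sides with deep Q-learning; the tabular Q-learning sketch below is a simplification, and the number of agents, action set, and reward structure are illustrative assumptions, not details from the paper:

```python
import random

random.seed(0)

N_AGENTS = 5    # decentralized learners (assumed size)
N_ACTIONS = 4   # candidate coordination points (assumed)
EPS, ALPHA = 0.1, 0.5  # one-shot repeated game, so no discounting needed

# Each decentralized agent keeps its own Q-values over actions;
# the central predictor keeps Q-values over which action to suppress.
agent_q = [[0.0] * N_ACTIONS for _ in range(N_AGENTS)]
central_q = [0.0] * N_ACTIONS

def eps_greedy(q):
    """Pick a random action with prob. EPS, else the current best."""
    if random.random() < EPS:
        return random.randrange(len(q))
    return max(range(len(q)), key=lambda a: q[a])

for step in range(2000):
    actions = [eps_greedy(q) for q in agent_q]
    predicted = eps_greedy(central_q)

    # Agents score only by coordinating on an action the predictor missed;
    # the predictor scores by anticipating the majority action.
    majority = max(set(actions), key=actions.count)
    suppressed = (predicted == majority)

    for i, a in enumerate(actions):
        r = 1.0 if (a == majority and not suppressed) else 0.0
        agent_q[i][a] += ALPHA * (r - agent_q[i][a])
    central_q[predicted] += ALPHA * ((1.0 if suppressed else 0.0) - central_q[predicted])
```

Because each side's best response undermines the other's, neither converges to a fixed strategy; the value estimates keep cycling, which is the adversarial pressure toward increasing behavioral complexity that the paper analyzes.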

Source journal: Topics in Cognitive Science (PSYCHOLOGY, EXPERIMENTAL)
CiteScore: 8.50
Self-citation rate: 10.00%
Articles per year: 52
Journal description: Topics in Cognitive Science (topiCS) is an innovative journal that covers all areas of cognitive science, including cognitive modeling, cognitive neuroscience, cognitive anthropology, and cognitive science and philosophy. topiCS aims to provide a forum for new communities of researchers, new controversies in established areas, debates and commentaries, and reflections and integration. The publication features multiple scholarly papers dedicated to a single topic. Some of these topics will appear together in one issue, but others may appear across several issues or develop into a regular feature. Controversies or debates started in one issue may be followed up by commentaries in a later issue, and so on; the format and origin of the topics vary greatly.