Adversarial Dynamics in Centralized Versus Decentralized Intelligent Systems
Levin Brinkmann, Manuel Cebrian, Niccolò Pescetelli
Topics in Cognitive Science · DOI: 10.1111/tops.12705 · Published 2023-10-30 · Journal Article
Citations: 0
Abstract
Artificial intelligence (AI) is often used to predict human behavior, potentially limiting individuals' and collectives' freedom to act. AI's most controversial and contested applications range from targeted advertising to crime prevention, including the suppression of civil disorder. Scholars and civil society watchdogs have warned of the oppressive dangers of AI in the hands of centralized institutions, such as governments or private corporations, with some suggesting that AI gives governments asymmetrical power over their citizens. Civil protests, by contrast, often rely on distributed networks of activists without centralized leadership or planning. They thus create an adversarial tension between centralized and decentralized intelligence, raising the question of how distributed human networks can collectively adapt to and outperform a hostile centralized AI that tries to anticipate and control their activities. This paper leverages multi-agent reinforcement learning to simulate dynamics within a human-machine hybrid society. We ask how decentralized intelligent agents can collectively adapt when competing with a centralized predictive algorithm whose predictions are used to suppress their coordination. In particular, we investigate an adversarial game between a collective of individual learners and a central predictive algorithm, each trained through deep Q-learning. We compare different predictive architectures and identify conditions under which the adversarial nature of this dynamic pushes each intelligence to increase its behavioral complexity in order to outperform its counterpart. We further show that a shared predictive algorithm drives decentralized agents to align their behavior. This work sheds light on the totalitarian danger posed by AI and provides evidence that humans organized in decentralized networks can overcome its risks by developing increasingly complex coordination strategies.
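To make the adversarial setup described in the abstract concrete, the sketch below is a deliberately simplified illustration, not the paper's implementation: the deep Q-networks are replaced with stateless tabular Q-learners, and the reward structure (agents gain by coordinating on the majority action while evading prediction; the predictor gains for every correct guess) is an assumption invented here purely to show the adversarial structure. All names and parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

N_AGENTS, N_ACTIONS, EPISODES = 8, 4, 5000
ALPHA, EPS = 0.1, 0.1  # learning rate and exploration rate (illustrative values)

# Stateless Q-values: one row of action values per decentralized agent ...
q_agents = np.zeros((N_AGENTS, N_ACTIONS))
# ... and one row per agent for the centralized predictor's guesses.
q_pred = np.zeros((N_AGENTS, N_ACTIONS))

hit_rates = []
for ep in range(EPISODES):
    # Centralized predictor guesses each agent's next action (epsilon-greedy).
    explore = rng.random(N_AGENTS) < EPS
    guesses = np.where(explore, rng.integers(N_ACTIONS, size=N_AGENTS),
                       q_pred.argmax(axis=1))
    # Decentralized agents pick their actions independently (epsilon-greedy).
    explore = rng.random(N_AGENTS) < EPS
    actions = np.where(explore, rng.integers(N_ACTIONS, size=N_AGENTS),
                       q_agents.argmax(axis=1))

    caught = guesses == actions
    # Toy reward (assumption): agents gain by coordinating on the majority
    # action but lose when predicted; the predictor gains per correct guess.
    majority = np.bincount(actions, minlength=N_ACTIONS).argmax()
    r_agents = (actions == majority).astype(float) - caught
    r_pred = caught.astype(float)

    # One-step Q-updates (bandit case: no successor state, so no discounting).
    idx = np.arange(N_AGENTS)
    q_agents[idx, actions] += ALPHA * (r_agents - q_agents[idx, actions])
    q_pred[idx, guesses] += ALPHA * (r_pred - q_pred[idx, guesses])
    hit_rates.append(caught.mean())

print(f"predictor hit rate, last 500 episodes: {np.mean(hit_rates[-500:]):.2f}")
```

Even this reduced version exhibits the mutual pressure the abstract describes: as the predictor's hit rate climbs, greedy agents are penalized for predictable choices and are pushed toward less predictable joint behavior, while the predictor must track the resulting shifts.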
Journal Introduction
Topics in Cognitive Science (topiCS) is an innovative journal that covers all areas of cognitive science, including cognitive modeling, cognitive neuroscience, cognitive anthropology, and cognitive science and philosophy. topiCS aims to provide a forum for:

- New communities of researchers
- New controversies in established areas
- Debates and commentaries
- Reflections and integration

The publication features multiple scholarly papers dedicated to a single topic. Some of these topics appear together in one issue, while others may appear across several issues or develop into a regular feature. Controversies or debates started in one issue may be followed up by commentaries in a later issue. However, the format and origin of the topics will vary greatly.