Evolution of Social Norms in LLM Agents using Natural Language

Ilya Horiguchi, Takahide Yoshida, Takashi Ikegami
{"title":"Evolution of Social Norms in LLM Agents using Natural Language","authors":"Ilya Horiguchi, Takahide Yoshida, Takashi Ikegami","doi":"arxiv-2409.00993","DOIUrl":null,"url":null,"abstract":"Recent advancements in Large Language Models (LLMs) have spurred a surge of\ninterest in leveraging these models for game-theoretical simulations, where\nLLMs act as individual agents engaging in social interactions. This study\nexplores the potential for LLM agents to spontaneously generate and adhere to\nnormative strategies through natural language discourse, building upon the\nfoundational work of Axelrod's metanorm games. Our experiments demonstrate that\nthrough dialogue, LLM agents can form complex social norms, such as\nmetanorms-norms enforcing the punishment of those who do not punish\ncheating-purely through natural language interaction. The results affirm the\neffectiveness of using LLM agents for simulating social interactions and\nunderstanding the emergence and evolution of complex strategies and norms\nthrough natural language. Future work may extend these findings by\nincorporating a wider range of scenarios and agent characteristics, aiming to\nuncover more nuanced mechanisms behind social norm formation.","PeriodicalId":501315,"journal":{"name":"arXiv - CS - Multiagent Systems","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Multiagent Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.00993","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Recent advancements in Large Language Models (LLMs) have spurred a surge of interest in leveraging these models for game-theoretical simulations, where LLMs act as individual agents engaging in social interactions. This study explores the potential for LLM agents to spontaneously generate and adhere to normative strategies through natural language discourse, building upon the foundational work of Axelrod's metanorm games. Our experiments demonstrate that through dialogue, LLM agents can form complex social norms, such as metanorms (norms enforcing the punishment of those who do not punish cheating), purely through natural language interaction. The results affirm the effectiveness of using LLM agents for simulating social interactions and understanding the emergence and evolution of complex strategies and norms through natural language. Future work may extend these findings by incorporating a wider range of scenarios and agent characteristics, aiming to uncover more nuanced mechanisms behind social norm formation.
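For readers unfamiliar with the baseline the abstract refers to, the sketch below shows the classic Axelrod-style metanorm game mechanics: agents may defect, observers may punish defectors, and (under the metanorm) others may in turn punish observers who fail to punish. The payoff constants (T=3, H=-1, P=-9, E=-2) follow the values commonly cited from Axelrod's 1986 norms paper, but the agent class, the single-round loop, and the omission of the evolutionary selection step are simplifications of my own. In the paper itself the agents are LLMs whose strategies emerge from natural-language dialogue rather than from numeric boldness/vengefulness parameters, so this is only a sketch of the underlying game, not the authors' implementation.

```python
import random

# Payoff constants as commonly cited from Axelrod (1986), "An Evolutionary Approach to Norms".
T = 3    # temptation payoff to a defector
H = -1   # harm inflicted on every other agent by a defection
P = -9   # cost to an agent who gets punished
E = -2   # enforcement cost paid by the punisher

class Agent:
    """Numeric stand-in for a strategy; in the paper, strategies arise from dialogue instead."""
    def __init__(self):
        self.boldness = random.random()      # propensity to defect
        self.vengefulness = random.random()  # propensity to punish (and meta-punish)
        self.score = 0

def play_round(agents, metanorms=True):
    """One round of the (meta)norm game over the whole population."""
    for i, actor in enumerate(agents):
        seen = random.random()               # chance each observer notices this opportunity
        if actor.boldness > seen:            # bold agents defect when detection risk is low
            actor.score += T
            for j, observer in enumerate(agents):
                if j == i:
                    continue
                observer.score += H
                if random.random() < seen:   # observer witnessed the defection
                    if random.random() < observer.vengefulness:
                        actor.score += P     # observer punishes the defector
                        observer.score += E
                    elif metanorms:
                        # Metanorm: others may punish the observer for failing to punish.
                        for k, meta in enumerate(agents):
                            if k in (i, j):
                                continue
                            if random.random() < seen and random.random() < meta.vengefulness:
                                observer.score += P
                                meta.score += E

if __name__ == "__main__":
    random.seed(0)
    population = [Agent() for _ in range(20)]
    for _ in range(50):
        play_round(population)
    avg_b = sum(a.boldness for a in population) / len(population)
    avg_v = sum(a.vengefulness for a in population) / len(population)
    print(f"mean boldness={avg_b:.2f}, mean vengefulness={avg_v:.2f}")
```

In Axelrod's evolutionary version, higher-scoring agents reproduce with mutation after each generation, and it is that selection pressure which the metanorm (the innermost loop above) stabilizes. The study discussed here replaces that numeric update with norms negotiated and enforced through natural-language dialogue among LLM agents.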