Identifying and Clustering Counter Relationships of Team Compositions in PvP Games for Efficient Balance Analysis

Chiu-Chou Lin, Yu-Wei Shih, Kuei-Ting Kuo, Yu-Cheng Chen, Chien-Hua Chen, Wei-Chen Chiu, I-Chen Wu
{"title":"识别和聚类 PvP 游戏中团队组合的对抗关系,实现高效的平衡分析","authors":"Chiu-Chou Lin, Yu-Wei Shih, Kuei-Ting Kuo, Yu-Cheng Chen, Chien-Hua Chen, Wei-Chen Chiu, I-Chen Wu","doi":"arxiv-2408.17180","DOIUrl":null,"url":null,"abstract":"How can balance be quantified in game settings? This question is crucial for\ngame designers, especially in player-versus-player (PvP) games, where analyzing\nthe strength relations among predefined team compositions-such as hero\ncombinations in multiplayer online battle arena (MOBA) games or decks in card\ngames-is essential for enhancing gameplay and achieving balance. We have\ndeveloped two advanced measures that extend beyond the simplistic win rate to\nquantify balance in zero-sum competitive scenarios. These measures are derived\nfrom win value estimations, which employ strength rating approximations via the\nBradley-Terry model and counter relationship approximations via vector\nquantization, significantly reducing the computational complexity associated\nwith traditional win value estimations. Throughout the learning process of\nthese models, we identify useful categories of compositions and pinpoint their\ncounter relationships, aligning with the experiences of human players without\nrequiring specific game knowledge. Our methodology hinges on a simple technique\nto enhance codebook utilization in discrete representation with a deterministic\nvector quantization process for an extremely small state space. Our framework\nhas been validated in popular online games, including Age of Empires II,\nHearthstone, Brawl Stars, and League of Legends. The accuracy of the observed\nstrength relations in these games is comparable to traditional pairwise win\nvalue predictions, while also offering a more manageable complexity for\nanalysis. Ultimately, our findings contribute to a deeper understanding of PvP\ngame dynamics and present a methodology that significantly improves game\nbalance evaluation and design.","PeriodicalId":501315,"journal":{"name":"arXiv - CS - Multiagent Systems","volume":"19 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Identifying and Clustering Counter Relationships of Team Compositions in PvP Games for Efficient Balance Analysis\",\"authors\":\"Chiu-Chou Lin, Yu-Wei Shih, Kuei-Ting Kuo, Yu-Cheng Chen, Chien-Hua Chen, Wei-Chen Chiu, I-Chen Wu\",\"doi\":\"arxiv-2408.17180\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"How can balance be quantified in game settings? This question is crucial for\\ngame designers, especially in player-versus-player (PvP) games, where analyzing\\nthe strength relations among predefined team compositions-such as hero\\ncombinations in multiplayer online battle arena (MOBA) games or decks in card\\ngames-is essential for enhancing gameplay and achieving balance. We have\\ndeveloped two advanced measures that extend beyond the simplistic win rate to\\nquantify balance in zero-sum competitive scenarios. These measures are derived\\nfrom win value estimations, which employ strength rating approximations via the\\nBradley-Terry model and counter relationship approximations via vector\\nquantization, significantly reducing the computational complexity associated\\nwith traditional win value estimations. 
Throughout the learning process of\\nthese models, we identify useful categories of compositions and pinpoint their\\ncounter relationships, aligning with the experiences of human players without\\nrequiring specific game knowledge. Our methodology hinges on a simple technique\\nto enhance codebook utilization in discrete representation with a deterministic\\nvector quantization process for an extremely small state space. Our framework\\nhas been validated in popular online games, including Age of Empires II,\\nHearthstone, Brawl Stars, and League of Legends. The accuracy of the observed\\nstrength relations in these games is comparable to traditional pairwise win\\nvalue predictions, while also offering a more manageable complexity for\\nanalysis. Ultimately, our findings contribute to a deeper understanding of PvP\\ngame dynamics and present a methodology that significantly improves game\\nbalance evaluation and design.\",\"PeriodicalId\":501315,\"journal\":{\"name\":\"arXiv - CS - Multiagent Systems\",\"volume\":\"19 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-08-30\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Multiagent Systems\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2408.17180\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Multiagent Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2408.17180","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

How can balance be quantified in game settings? This question is crucial for game designers, especially in player-versus-player (PvP) games, where analyzing the strength relations among predefined team compositions, such as hero combinations in multiplayer online battle arena (MOBA) games or decks in card games, is essential for enhancing gameplay and achieving balance. We have developed two advanced measures that extend beyond the simplistic win rate to quantify balance in zero-sum competitive scenarios. These measures are derived from win value estimations, which employ strength rating approximations via the Bradley-Terry model and counter relationship approximations via vector quantization, significantly reducing the computational complexity associated with traditional win value estimations. Throughout the learning process of these models, we identify useful categories of compositions and pinpoint their counter relationships, aligning with the experiences of human players without requiring specific game knowledge. Our methodology hinges on a simple technique to enhance codebook utilization in discrete representation with a deterministic vector quantization process for an extremely small state space. Our framework has been validated in popular online games, including Age of Empires II, Hearthstone, Brawl Stars, and League of Legends. The accuracy of the observed strength relations in these games is comparable to traditional pairwise win value predictions, while also offering a more manageable complexity for analysis. Ultimately, our findings contribute to a deeper understanding of PvP game dynamics and present a methodology that significantly improves game balance evaluation and design.
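The two ingredients named in the abstract, strength ratings and counter relationships, can be illustrated with short sketches. The first is a minimal Bradley-Terry fit: given hypothetical match records between made-up compositions, it estimates one strength score per composition by gradient ascent on the binomial log-likelihood, with P(A beats B) = sigmoid(score[A] - score[B]). This is an illustrative stand-in under assumed data, not the authors' implementation.

```python
import math
from collections import defaultdict

# Hypothetical match records: (composition_A, composition_B, wins_of_A, wins_of_B).
matches = [
    ("rush",   "boom",   70, 30),
    ("boom",   "turtle", 65, 35),
    ("turtle", "rush",   60, 40),   # note the rock-paper-scissors loop
]

def fit_bradley_terry(matches, iters=2000, lr=0.002):
    """Fit one strength score per composition so that
    P(A beats B) = sigmoid(score[A] - score[B])."""
    scores = defaultdict(float)
    for _ in range(iters):
        grads = defaultdict(float)
        for a, b, wins_a, wins_b in matches:
            p = 1.0 / (1.0 + math.exp(-(scores[a] - scores[b])))
            # Gradient of the binomial log-likelihood with respect to score[a];
            # score[b] receives the opposite sign.
            g = wins_a * (1.0 - p) - wins_b * p
            grads[a] += g
            grads[b] -= g
        for comp, g in grads.items():
            scores[comp] += lr * g
    # Scores are only identified up to an additive constant, so centre them.
    mean = sum(scores.values()) / len(scores)
    return {comp: s - mean for comp, s in scores.items()}

ratings = fit_bradley_terry(matches)
for comp, s in sorted(ratings.items(), key=lambda kv: -kv[1]):
    print(f"{comp}: {s:+.3f}")
# The cyclic counters largely cancel out, so the fitted ratings end up close
# together even though every individual matchup is lopsided.
```

A cyclic matchup like the one above is exactly the case where a single rating per composition is not enough, which is what motivates the counter-relationship analysis sketched next.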
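The counter-relationship side can be pictured as quantizing each composition's profile of win rates against all other compositions into a small codebook and then reading off how the resulting categories fare against one another. The sketch below uses a deterministic k-means-style quantizer with farthest-point seeding on a made-up win-rate matrix; the composition names, the numbers, and the cluster count are all hypothetical, and this simplification does not reproduce the paper's codebook-utilization technique.

```python
import numpy as np

# Hypothetical compositions and their pairwise win rates (row beats column).
comps = ["rush_a", "rush_b", "boom_a", "boom_b", "turtle_a", "turtle_b"]
W = np.array([
    [0.50, 0.52, 0.68, 0.71, 0.41, 0.38],
    [0.48, 0.50, 0.70, 0.66, 0.42, 0.40],
    [0.32, 0.30, 0.50, 0.51, 0.64, 0.67],
    [0.29, 0.34, 0.49, 0.50, 0.66, 0.63],
    [0.59, 0.58, 0.36, 0.34, 0.50, 0.49],
    [0.62, 0.60, 0.33, 0.37, 0.51, 0.50],
])

def quantize(win_matrix, k=3, iters=50):
    """Cluster win-rate profiles with a tiny k-means-style codebook.
    Farthest-point seeding keeps the whole procedure deterministic."""
    X = np.asarray(win_matrix, dtype=float)
    codebook = [X[0]]
    while len(codebook) < k:
        dist = np.min([np.linalg.norm(X - c, axis=1) for c in codebook], axis=0)
        codebook.append(X[int(np.argmax(dist))])
    codebook = np.stack(codebook)
    for _ in range(iters):
        assign = np.argmin(
            np.linalg.norm(X[:, None, :] - codebook[None, :, :], axis=2), axis=1)
        for j in range(k):
            members = X[assign == j]
            if len(members):                  # leave unused codes untouched
                codebook[j] = members.mean(axis=0)
    return assign, codebook

assign, _ = quantize(W, k=3)
n_clusters = assign.max() + 1

# Average win rate of cluster i's members against cluster j's members;
# values far from 0.5 expose counter relationships between the categories.
counter = np.array([[W[np.ix_(assign == i, assign == j)].mean()
                     for j in range(n_clusters)] for i in range(n_clusters)])

for name, c in zip(comps, assign):
    print(f"{name} -> cluster {c}")
print(np.round(counter, 2))
```

Entries of the printed cluster-versus-cluster matrix that sit far from 0.5 flag counter relationships between the discovered categories, which is the kind of summary the paper uses for balance analysis at a much lower complexity than exhaustive pairwise prediction.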