{"title":"CoSD: Balancing behavioral consistency and diversity in unsupervised skill discovery.","authors":"Shuai Qing, Yi Sun, Kun Ding, Hui Zhang, Fei Zhu","doi":"10.1016/j.neunet.2024.106889","DOIUrl":null,"url":null,"abstract":"<p><p>In hierarchical reinforcement learning, unsupervised skill discovery holds promise for overcoming the challenge of sparse rewards commonly encountered in traditional reinforcement learning. Although previous unsupervised skill discovery methods excelled at maximizing intrinsic rewards, they often overly prioritized skill diversity. Unrestrained pursuit of diversity leads skills to concentrate attention on unexplored domains, overlooking the internal consistency of skills themselves, resulting in the state visit distribution of individual skills lacking concentration. To address this problem, the Constrained Skill Discovery (CoSD) algorithm is proposed to balance the diversity and behavioral consistency of skills. CoSD integrates both the forward and the reverse decomposition forms of mutual information and uses the maximum entropy policy to maximize the information-theoretic objective of skill learning while requiring that each skill maintain low state entropy internally, which enhances the behavioral consistency of the skills while pursuing the diversity of the skills and ensures that the learned skills have a high degree of stability. Experimental results demonstrated that, compared with other skill discovery methods based on mutual information, skills from CoSD exhibited a more concentrated state visit distribution, indicating higher behavioral consistency and stability. In some complex downstream tasks, the skills with higher behavioral consistency exhibit superior performance.</p>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"182 ","pages":"106889"},"PeriodicalIF":6.0000,"publicationDate":"2024-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neural Networks","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1016/j.neunet.2024.106889","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Abstract
In hierarchical reinforcement learning, unsupervised skill discovery holds promise for overcoming the sparse-reward challenge commonly encountered in traditional reinforcement learning. Although previous unsupervised skill discovery methods excel at maximizing intrinsic rewards, they often over-prioritize skill diversity. An unrestrained pursuit of diversity drives skills toward unexplored regions of the state space while neglecting the internal consistency of each skill, so the state visitation distribution of an individual skill lacks concentration. To address this problem, the Constrained Skill Discovery (CoSD) algorithm is proposed to balance skill diversity and behavioral consistency. CoSD integrates both the forward and the reverse decompositions of mutual information and uses a maximum-entropy policy to maximize the information-theoretic skill-learning objective while requiring each skill to maintain low internal state entropy. This enhances behavioral consistency while still pursuing diversity and ensures that the learned skills are highly stable. Experimental results show that, compared with other mutual-information-based skill discovery methods, skills learned by CoSD exhibit a more concentrated state visitation distribution, indicating higher behavioral consistency and stability. On several complex downstream tasks, skills with higher behavioral consistency achieve superior performance.
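The objective described in the abstract can be read through a brief sketch; the exact formulation below is an assumption and may differ from the paper. Writing S for visited states and Z for the skill latent, the mutual information admits a reverse and a forward decomposition,

$$I(S;Z) = H(Z) - H(Z \mid S) = H(S) - H(S \mid Z),$$

and the constrained objective can then be viewed as maximizing this quantity together with a maximum-entropy policy term while bounding the per-skill state entropy,

$$\max_{\pi}\; I(S;Z) + \alpha\,\mathcal{H}\!\left(\pi(\cdot \mid s, z)\right) \quad \text{s.t.} \quad H(S \mid Z = z) \le \epsilon \;\; \text{for every skill } z,$$

where the weight $\alpha$ and the bound $\epsilon$ are illustrative symbols, not values reported by the authors. Intuitively, keeping $H(S \mid Z = z)$ small concentrates each skill's state visitation distribution (behavioral consistency), while maximizing $I(S;Z)$ keeps distinct skills distinguishable (diversity).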
Journal Introduction:
Neural Networks is a platform that aims to foster an international community of scholars and practitioners interested in neural networks, deep learning, and other approaches to artificial intelligence and machine learning. Our journal invites submissions covering various aspects of neural networks research, from computational neuroscience and cognitive modeling to mathematical analyses and engineering applications. By providing a forum for interdisciplinary discussions between biology and technology, we aim to encourage the development of biologically-inspired artificial intelligence.