{"title":"Dynamic preference inference network: Improving sample efficiency for multi-objective reinforcement learning by preference estimation","authors":"","doi":"10.1016/j.knosys.2024.112512","DOIUrl":null,"url":null,"abstract":"<div><p>Multi-objective reinforcement learning (MORL) addresses the challenge of optimizing policies in environments with multiple conflicting objectives. Traditional approaches often rely on scalar utility functions, which require predefined preference weights, limiting their adaptability and efficiency. To overcome this, we propose the Dynamic Preference Inference Network (DPIN), a novel method designed to enhance sample efficiency by dynamically estimating the trajectory decision preference of the agent. DPIN leverages a neural network to predict the most favorable preference distribution for each trajectory, enabling more effective policy updates and improving overall performance in complex MORL tasks. Extensive experiments in various benchmark environments demonstrate that DPIN significantly outperforms existing state-of-the-art methods, achieving higher scalarized returns and hypervolume. 
Our findings highlight DPIN’s ability to adapt to varying preferences, reduce sample complexity, and provide robust solutions in multi-objective settings.</p></div>","PeriodicalId":49939,"journal":{"name":"Knowledge-Based Systems","volume":null,"pages":null},"PeriodicalIF":7.2000,"publicationDate":"2024-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Knowledge-Based Systems","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0950705124011468","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0
Abstract
Multi-objective reinforcement learning (MORL) addresses the challenge of optimizing policies in environments with multiple conflicting objectives. Traditional approaches often rely on scalar utility functions, which require predefined preference weights, limiting their adaptability and efficiency. To overcome this, we propose the Dynamic Preference Inference Network (DPIN), a novel method designed to enhance sample efficiency by dynamically estimating the trajectory decision preference of the agent. DPIN leverages a neural network to predict the most favorable preference distribution for each trajectory, enabling more effective policy updates and improving overall performance in complex MORL tasks. Extensive experiments in various benchmark environments demonstrate that DPIN significantly outperforms existing state-of-the-art methods, achieving higher scalarized returns and hypervolume. Our findings highlight DPIN’s ability to adapt to varying preferences, reduce sample complexity, and provide robust solutions in multi-objective settings.
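To make the scalarization idea concrete: in linear-scalarized MORL, a preference (weight) vector turns a vector of per-objective returns into a single scalar, and the abstract's key point is that DPIN estimates this preference per trajectory rather than fixing it in advance. The sketch below is illustrative only and is not the authors' implementation; the function names (`scalarized_return`, `infer_preference`) and the use of a softmax to map logits to a preference distribution are assumptions for the example.

```python
import numpy as np

def scalarized_return(objective_returns, preference):
    """Linear scalarization: weighted sum of per-objective returns."""
    return float(np.dot(objective_returns, preference))

def infer_preference(logits):
    """Map unnormalized scores to a valid preference distribution
    (non-negative, sums to 1) via a numerically stable softmax.
    In DPIN the logits would come from a learned network; here they
    are a hypothetical stand-in."""
    shifted = logits - np.max(logits)
    e = np.exp(shifted)
    return e / e.sum()

# Hypothetical trajectory with two conflicting objectives
returns = np.array([10.0, 4.0])

# Fixed, predefined preference (the traditional approach)
w_fixed = np.array([0.7, 0.3])
print(scalarized_return(returns, w_fixed))  # ~8.2

# Preference estimated from (hypothetical) network logits
w_est = infer_preference(np.array([2.0, 0.5]))
print(w_est, scalarized_return(returns, w_est))
```

The second call illustrates why per-trajectory estimation matters: a trajectory that scores highly on the first objective is scalarized under weights that favor that objective, which is the mechanism the abstract credits for more effective policy updates.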
About the Journal
Knowledge-Based Systems, an international and interdisciplinary journal in artificial intelligence, publishes original, innovative, and creative research results in the field. It focuses on knowledge-based and other artificial intelligence techniques-based systems. The journal aims to support human prediction and decision-making through data science and computation techniques, provide a balanced coverage of theory and practical study, and encourage the development and implementation of knowledge-based intelligence models, methods, systems, and software tools. Applications in business, government, education, engineering, and healthcare are emphasized.