Learning state-action correspondence across reinforcement learning control tasks via partially paired trajectories

Applied Intelligence · IF 3.5 · CAS Tier 2 (Computer Science) · Q2, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-12-24 · DOI: 10.1007/s10489-024-06190-7
Javier García, Iñaki Rañó, J. Miguel Burés, Xosé R. Fdez-Vidal, Roberto Iglesias

Abstract

In many reinforcement learning (RL) tasks, the state-action space may be subject to changes over time (e.g., an increased number of observable features, or changes in the representation of actions). Given these changes, the previously learnt policy will likely fail due to the mismatch of input and output features, and another policy must be trained from scratch, which is inefficient in terms of sample complexity. Recent works in transfer learning have succeeded in making RL algorithms more efficient by incorporating knowledge from previous tasks, thus partially alleviating this problem. However, such methods typically must be provided with an explicit state-action correspondence from one task to the other. An autonomous agent may not have access to such high-level information, but should be able to analyze its experience to identify similarities between tasks. In this paper, we propose a novel method for automatically learning a correspondence of states and actions from one task to another through an agent's experience. In contrast to previous approaches, our method is based on two key insights: i) only the first state of the trajectories of the two tasks is paired, while the rest are unpaired and randomly collected, and ii) the transition model of the source task is used to predict the dynamics of the target task, thus aligning the unpaired states and actions. Additionally, this paper intentionally decouples the learning of the state-action correspondence from the transfer technique used, making it easy to combine with any transfer method. Our experiments demonstrate that our approach significantly accelerates transfer learning across a diverse set of problems, varying in state/action representation, physics parameters, and morphology, when compared to state-of-the-art algorithms that rely on cycle-consistency.
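The two insights in the abstract — anchoring each trajectory with a single paired initial state, and using the source task's known transition model to align the remaining unpaired states — can be illustrated with a minimal sketch. Everything concrete below (linear source dynamics `s' = A s + B a`, a hidden linear correspondence `x = M s`, a linear learned map `G`, and plain gradient descent) is an illustrative assumption for this sketch, not the paper's actual architecture or training procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed (known) source-task dynamics: s' = A s + B a  (2-D state, 1-D action).
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])

# Hidden ground-truth correspondence x = M s; the learner should recover
# G ~= M^{-1}, mapping target states back to source states.
M = np.array([[2.0, 0.5], [0.3, 1.5]])
Minv = np.linalg.inv(M)

def target_step(x, a):
    """Target dynamics induced by the correspondence: x' = M A M^{-1} x + M B a."""
    return M @ A @ Minv @ x + M @ B @ a

# Collect target trajectories under random actions. Only the FIRST state of
# each trajectory is paired with its source counterpart s0; the rest are
# unpaired (insight i from the abstract).
S0, X0, X, Act, Xn = [], [], [], [], []
for _ in range(20):
    s0 = rng.normal(size=2)
    x = M @ s0
    S0.append(s0)
    X0.append(x)
    for _ in range(10):
        a = rng.normal(size=1)
        x_next = target_step(x, a)
        X.append(x)
        Act.append(a)
        Xn.append(x_next)
        x = x_next
S0, X0, X, Act, Xn = map(np.array, (S0, X0, X, Act, Xn))

# Learn a linear map G (target state -> source state) by gradient descent on:
#   (i)  paired first-state loss    ||G x0 - s0||^2
#   (ii) dynamics-consistency loss  ||A (G x_t) + B a_t - G x_{t+1}||^2,
# i.e. the source transition model must explain the aligned target rollouts
# (insight ii from the abstract).
G = rng.normal(scale=0.1, size=(2, 2))
n_terms = len(X0) + len(X)
for _ in range(20000):
    R0 = X0 @ G.T - S0                          # paired first-state residuals
    R = X @ G.T @ A.T + Act @ B.T - Xn @ G.T    # dynamics-consistency residuals
    grad = R0.T @ X0 + A.T @ (R.T @ X) - R.T @ Xn
    G -= 0.01 * grad / n_terms

print("max recovery error:", np.abs(G - Minv).max())
```

The paired first-state term matters: dynamics consistency alone is trivially satisfied by `G = 0`, so the single paired state per trajectory is what pins the alignment to the true correspondence.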


Source journal: Applied Intelligence (Engineering & Technology – Computer Science: Artificial Intelligence)

CiteScore: 6.60 · Self-citation rate: 20.80% · Articles published: 1361 · Review time: 5.9 months

Journal description: With a focus on research in artificial intelligence and neural networks, this journal addresses issues involving solutions of real-life manufacturing, defense, management, government and industrial problems which are too complex to be solved through conventional approaches and require the simulation of intelligent thought processes, heuristics, applications of knowledge, and distributed and parallel processing. The integration of these multiple approaches in solving complex problems is of particular importance. The journal presents new and original research and technological developments, addressing real and complex issues applicable to difficult problems. It provides a medium for exchanging scientific research and technological achievements accomplished by the international community.