A Framework to Discover and Reuse Object-Oriented Options in Reinforcement Learning
R. Bonini, Felipe Leno da Silva, R. Glatt, Edison Spina, Anna Helena Reali Costa
2018 7th Brazilian Conference on Intelligent Systems (BRACIS), October 2018. DOI: 10.1109/BRACIS.2018.00027
Citations: 2
Abstract
Reinforcement Learning is a successful yet slow technique for training autonomous agents. Option-based solutions can accelerate learning and transfer learned behaviors across tasks by encapsulating a partial policy. However, such options are commonly specific to a single task, do not take into account features shared between tasks, and may not correspond exactly to an optimal behavior when transferred to another task. Unprincipled transfer can therefore provide bad options to the agent, hampering the learning process. We propose a way to discover and reuse learned object-oriented options in a probabilistic way, enabling better action choices for the agent across multiple different tasks. Our experimental evaluation shows that our proposal is able to learn and successfully reuse options across different tasks.
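To make the abstract's core objects concrete, the sketch below shows an option in the standard sense (an initiation set, a partial policy, and a termination condition) and a softmax-style probabilistic choice among previously learned options. This is a minimal illustration under assumed names (`Option`, `choose_option`, `temperature`), not the authors' implementation or their exact reuse mechanism.

```python
# Illustrative sketch only: an option as (initiation set, partial policy,
# termination condition), plus probabilistic reuse of transferred options.
# All names and the softmax weighting are assumptions, not the paper's code.
import math
import random
from dataclasses import dataclass
from typing import Callable, List, Tuple

State = Tuple[int, int]  # e.g., a position in a grid-world task


@dataclass
class Option:
    name: str
    initiation: Callable[[State], bool]   # states where the option may start
    policy: Callable[[State], str]        # partial policy followed while active
    termination: Callable[[State], bool]  # states where the option ends
    value: float = 0.0                    # estimated usefulness in the current task


def choose_option(state: State, options: List[Option], temperature: float = 0.5) -> Option:
    """Pick among applicable options with softmax probabilities, so options
    learned in earlier tasks are reused in proportion to how useful they
    have proven so far, rather than transferred unconditionally."""
    applicable = [o for o in options if o.initiation(state)]
    weights = [math.exp(o.value / temperature) for o in applicable]
    return random.choices(applicable, weights=weights, k=1)[0]


# Usage: a toy "go right" option learned on one grid task, reusable on another.
go_right = Option(
    name="go-right",
    initiation=lambda s: s[0] < 4,
    policy=lambda s: "right",
    termination=lambda s: s[0] >= 4,
)
print(choose_option((0, 0), [go_right]).name)
```

Weighting reuse by an estimated option value is one simple way to keep a poorly transferred option from dominating action selection, which is the failure mode the abstract attributes to unprincipled transfer.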