{"title":"Task-driven Risk-bounded Hierarchical Reinforcement Learning Based on Iterative Refinement","authors":"Viraj Parimi, Sungkweon Hong, Brian Williams","doi":"10.1609/aaaiss.v3i1.31281","DOIUrl":null,"url":null,"abstract":"Deep Reinforcement Learning (DRL) has garnered substantial acclaim for its versatility and widespread applications across diverse domains. Aligned with human-like learning, DRL is grounded in the fundamental principle of learning from interaction, wherein agents dynamically adjust behavior based on environmental feedback in the form of rewards. This iterative trial-and-error process, mirroring human learning, underscores the importance of observation, experimentation, and feedback in shaping understanding and behavior. DRL agents, trained to navigate complex surroundings, refine their knowledge through hierarchical and abstract representations, empowered by deep neural networks. These representations enable efficient handling of long-horizon tasks and flexible adaptation to novel situations, akin to the human ability to construct mental models for comprehending complex concepts and predicting outcomes. Hence, abstract representation building emerges as a critical aspect in the learning processes of both artificial agents and human learners, particularly in long-horizon tasks.\n\nFurthermore, human decision-making, deeply rooted in evolutionary history, exhibits a remarkable capacity to balance the tradeoff between risk and cost across various domains. This cognitive process involves assessing potential negative consequences, evaluating factors such as the likelihood of adverse outcomes, severity of potential harm, and overall uncertainty. Humans intuitively gauge inherent risks and adeptly weigh associated costs, extending beyond monetary expenses to include time, effort, and opportunity costs. The nuanced ability of humans to consider the tradeoff between risk and cost highlights the complexity and adaptability of human decision-making, a skill lacking in typical DRL agents. Principles like these derived from human-like learning present an avenue for inspiring advancements in DRL, fostering the development of more adaptive and intelligent artificial agents.\n\nMotivated by these observations and focusing on practical challenges in robotics, our efforts target risk-aware stochastic sequential decision-making problem which is crucial for tasks with extended time frames and varied strategies. A novel integration of model-based conditional planning with DRL is proposed, inspired by hierarchical techniques. This approach breaks down complex tasks into manageable subtasks(motion primitives), ensuring safety constraints and informed decision-making. Unlike existing methods, our approach addresses motion primitive improvement iteratively, employing diverse prioritization functions to guide the search process effectively. 
This risk-bounded planning algorithm seamlessly integrates conditional planning and motion primitive learning, prioritizing computational efforts for enhanced efficiency within specified time limits.","PeriodicalId":516827,"journal":{"name":"Proceedings of the AAAI Symposium Series","volume":"13 18","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the AAAI Symposium Series","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1609/aaaiss.v3i1.31281","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Deep Reinforcement Learning (DRL) has garnered substantial acclaim for its versatility and widespread applications across diverse domains. Aligned with human-like learning, DRL is grounded in the fundamental principle of learning from interaction, wherein agents dynamically adjust behavior based on environmental feedback in the form of rewards. This iterative trial-and-error process, mirroring human learning, underscores the importance of observation, experimentation, and feedback in shaping understanding and behavior. DRL agents, trained to navigate complex surroundings, refine their knowledge through hierarchical and abstract representations, empowered by deep neural networks. These representations enable efficient handling of long-horizon tasks and flexible adaptation to novel situations, akin to the human ability to construct mental models for comprehending complex concepts and predicting outcomes. Hence, abstract representation building emerges as a critical aspect in the learning processes of both artificial agents and human learners, particularly in long-horizon tasks.
Furthermore, human decision-making, deeply rooted in evolutionary history, exhibits a remarkable capacity to balance the tradeoff between risk and cost across various domains. This cognitive process involves assessing potential negative consequences, evaluating factors such as the likelihood of adverse outcomes, severity of potential harm, and overall uncertainty. Humans intuitively gauge inherent risks and adeptly weigh associated costs, extending beyond monetary expenses to include time, effort, and opportunity costs. The nuanced ability of humans to consider the tradeoff between risk and cost highlights the complexity and adaptability of human decision-making, a skill lacking in typical DRL agents. Principles like these derived from human-like learning present an avenue for inspiring advancements in DRL, fostering the development of more adaptive and intelligent artificial agents.
Motivated by these observations and focusing on practical challenges in robotics, our efforts target the risk-aware stochastic sequential decision-making problem, which is crucial for tasks with extended time frames and varied strategies. We propose a novel integration of model-based conditional planning with DRL, inspired by hierarchical techniques. This approach breaks complex tasks down into manageable subtasks (motion primitives) while enforcing safety constraints and supporting informed decision-making. Unlike existing methods, our approach improves motion primitives iteratively, employing diverse prioritization functions to guide the search process effectively. The resulting risk-bounded planning algorithm seamlessly integrates conditional planning and motion primitive learning, prioritizing computational effort for greater efficiency within specified time limits.
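The iterative refinement scheme sketched above can be made concrete with a small example. The following Python snippet is a minimal, assumption-laden illustration, not the authors' implementation: it assumes the conditional planner has already decomposed the task into motion primitives, models overall plan risk as a simple sum of per-primitive risk estimates, and uses one example prioritization function (highest-risk-first) to decide which primitive's policy receives the next refinement step within a fixed iteration budget. All names (MotionPrimitive, refine, priority, iterative_refinement) are hypothetical.

```python
import heapq
import random

class MotionPrimitive:
    """Hypothetical stand-in for a learned subtask policy."""
    def __init__(self, name):
        self.name = name
        self.risk = 1.0   # estimated probability of violating a safety constraint
        self.cost = 10.0  # estimated execution cost (time, effort, etc.)

    def refine(self):
        """One DRL training increment on this primitive's policy (stubbed).
        A real refinement step would run policy-gradient or value updates;
        here we just model that refinement tends to reduce estimated risk."""
        self.risk *= random.uniform(0.7, 0.95)
        self.cost *= random.uniform(0.9, 1.0)

def priority(p):
    # Example prioritization function: refine the highest-risk primitive first.
    # heapq pops the smallest value, so negate the risk.
    return -p.risk

def iterative_refinement(primitives, risk_bound, max_iters):
    # Index i breaks ties so heapq never compares MotionPrimitive objects.
    heap = [(priority(p), i, p) for i, p in enumerate(primitives)]
    heapq.heapify(heap)
    for _ in range(max_iters):
        plan_risk = sum(p.risk for p in primitives)  # simplistic additive risk model
        if plan_risk <= risk_bound:
            return True  # plan now satisfies the risk bound
        _, i, worst = heapq.heappop(heap)
        worst.refine()
        heapq.heappush(heap, (priority(worst), i, worst))
    return False  # iteration budget exhausted before meeting the bound

if __name__ == "__main__":
    task = [MotionPrimitive("reach"), MotionPrimitive("grasp"), MotionPrimitive("place")]
    print("risk bound met:", iterative_refinement(task, risk_bound=0.3, max_iters=50))
```

Swapping in a different priority function (e.g., cost-weighted or uncertainty-based) changes how computational effort is allocated across primitives, which is the knob the abstract refers to when it mentions diverse prioritization functions guiding the search.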