Context-Generative Default Policy for Bounded Rational Agent
Durgakant Pushp, Junhong Xu, Zheng Chen, Lantao Liu
arXiv - CS - Robotics, 2024-09-17, arXiv:2409.11604
Abstract
Bounded rational agents often make decisions by evaluating a finite selection of
choices, typically derived from a reference point termed the 'default policy,'
which is based on previous experience. However, the inherent rigidity of a
static default policy poses significant challenges when agents operate in
unknown environments that are not covered by their prior knowledge. In this
work, we introduce a context-generative default policy that leverages the region
already observed by the robot to predict the unobserved part of the environment,
thereby enabling the robot to adaptively adjust its default policy based on both
the actual observed map and the imagined unobserved map. Furthermore, the
adaptive nature of the bounded rationality framework enables the robot to cope
with unreliable or incorrect imaginations by selectively sampling a few
trajectories in the vicinity of the default policy. Our approach uses a
diffusion model for map prediction and sampling-based planning with B-spline
trajectory optimization to generate the default policy. Extensive evaluations
show that the context-generative policy outperforms baseline methods at
identifying and avoiding unseen obstacles. Additionally, real-world experiments
with Crazyflie drones demonstrate the adaptability of the proposed method, even
in environments outside the training distribution.
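The core planning idea in the abstract, sampling a few candidate trajectories in the vicinity of a B-spline default policy and keeping the best one, can be illustrated with a minimal sketch. This is not the authors' implementation: the Gaussian perturbation of control points, the fixed endpoints, the obstacle-penalty cost, and all function names here are illustrative assumptions.

```python
import numpy as np
from scipy.interpolate import BSpline

def clamped_knots(n_ctrl, degree):
    # Clamped uniform knot vector so the curve interpolates the endpoints.
    n_inner = n_ctrl - degree - 1
    inner = np.linspace(0, 1, n_inner + 2)[1:-1] if n_inner > 0 else np.array([])
    return np.concatenate([np.zeros(degree + 1), inner, np.ones(degree + 1)])

def sample_near_default(ctrl_default, n_samples=8, sigma=0.2, degree=3, rng=None):
    # Perturb the default policy's control points with Gaussian noise,
    # keeping start and goal fixed, and evaluate each resulting B-spline.
    rng = np.random.default_rng(rng)
    knots = clamped_knots(len(ctrl_default), degree)
    u = np.linspace(0, 1, 50)
    trajs = []
    for _ in range(n_samples):
        noisy = ctrl_default + rng.normal(0.0, sigma, ctrl_default.shape)
        noisy[0], noisy[-1] = ctrl_default[0], ctrl_default[-1]
        trajs.append(BSpline(knots, noisy, degree)(u))
    return trajs

def pick_best(trajs, obstacle, radius=0.5):
    # Cost: path length plus a penalty for entering the obstacle's radius
    # (here the "imagined" map is reduced to a single hypothetical obstacle).
    def cost(tr):
        length = np.sum(np.linalg.norm(np.diff(tr, axis=0), axis=1))
        d = np.linalg.norm(tr - obstacle, axis=1)
        return length + 100.0 * np.sum(np.maximum(radius - d, 0.0))
    return min(trajs, key=cost)

# Straight-line default policy from (0,0) to (4,0), obstacle on the path.
ctrl = np.stack([np.linspace(0, 4, 6), np.zeros(6)], axis=1)
candidates = sample_near_default(ctrl, rng=0)
best = pick_best(candidates, obstacle=np.array([2.0, 0.0]))
```

Restricting sampling to a small neighborhood of the default policy is what keeps the agent bounded rational: when the imagined map is wrong, only a few nearby alternatives need to be evaluated rather than the full trajectory space.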