Shuyuan Xu, Qiao Liu, Yuhui Hu, Mengtian Xu, Jiachen Hao
Journal: Green Energy and Intelligent Transportation, Vol. 2, Issue 2, Article 100062
DOI: 10.1016/j.geits.2022.100062
Published: 2023-04-01 (Journal Article)
URL: https://www.sciencedirect.com/science/article/pii/S2773153722000627
Decision-making models on perceptual uncertainty with distributional reinforcement learning
Decision-making for autonomous vehicles is difficult when obstacles are occluded, because the lack of accurate perceptual information impairs judgment. Existing methods can produce overly conservative strategies and time-consuming computations, and fail to balance safety with efficiency. We propose using distributional reinforcement learning to hedge the risk of a strategy, optimize the worst cases, and improve the efficiency of the algorithm so that the agent learns better actions. A batch of the smaller quantile values replaces the mean value to optimize the worst case; combined with frame stacking, we call the resulting model the Efficient Fully parameterized Quantile Function (E-FQF). Evaluated in signal-free intersection crossing scenarios with perceptual occlusion, the model acts more efficiently and reduces the collision rate compared with conventional reinforcement learning algorithms. It is also more robust to data loss than a method with an embedded long short-term memory (LSTM).
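The abstract's core idea of "a batch of the smaller quantile values replacing the mean" can be sketched as a lower-tail (CVaR-style) aggregation over per-action return quantiles. This is an illustrative sketch only, not the paper's E-FQF implementation: the function names, the choice of `k`, and the toy numbers are assumptions introduced here for clarity.

```python
import numpy as np

def risk_averse_q(quantiles: np.ndarray, k: int) -> np.ndarray:
    """Aggregate per-action quantile estimates by averaging only the
    k smallest quantiles (a lower-tail mean), instead of the full mean
    used by risk-neutral distributional RL.

    quantiles: shape (num_actions, num_quantiles); each row holds the
               sampled return quantiles for one action.
    """
    lower_tail = np.sort(quantiles, axis=1)[:, :k]  # k worst outcomes per action
    return lower_tail.mean(axis=1)

# Toy example: two actions with equal mean return but different spread.
q = np.array([
    [-4.0, 1.0, 2.0, 9.0],   # risky action: heavy lower tail
    [ 1.0, 2.0, 2.0, 3.0],   # safe action: tight distribution
])
print(q.mean(axis=1))                          # risk-neutral values: [2. 2.]
print(risk_averse_q(q, k=2))                   # lower-tail values: [-1.5 1.5]
print(int(np.argmax(risk_averse_q(q, k=2))))   # risk-averse greedy choice: 1
```

Under the full mean the two actions are indistinguishable; averaging only the two smallest quantiles penalizes the action whose return distribution has a heavy lower tail, which is the mechanism the abstract credits for avoiding collisions under occlusion.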