Developing a Deep Q-Learning and Neural Network Framework for Trajectory Planning
Venkata Satya Rahul Kosuru, Ashwin Kavasseri Venkitaraman
European Journal of Engineering and Technology Research, vol. 7, published 2022-12-26
DOI: 10.24018/ejeng.2022.7.6.2944
With the recent expansion of the self-driving and autonomy field, nearly every vehicle is equipped with some form of driver-assist feature to improve driver comfort. Extending these systems to full autonomy is extremely complicated, since it requires planning safe paths in unstable and dynamic environments. Imitation learning and other path-learning techniques lack generalization and safety guarantees. Model selection and obstacle avoidance are two difficult issues in autonomous-vehicle research. Thanks to the advent of deep feature representations, Q-learning has evolved into a potent learning framework that can acquire complicated strategies in high-dimensional contexts. This study proposes a deep Q-learning approach that uses experience replay and contextual expertise to address these issues. A path-planning strategy that runs deep Q-learning on the network edge node is proposed to enhance the driving performance of autonomous vehicles in terms of energy consumption. When connected vehicles maintain the recommended speed, the suggested approach tracks the trajectory using a proportional-integral-derivative (PID) controller. Employing the PID controller to track the terminal points ensures a smooth trajectory and reduced jerk. The computational findings demonstrate that, in contrast to traditional techniques, the approach can explore a path in an unknown environment with few iterations and a higher average payoff, and that it converges more quickly to an ideal strategy.
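To make the Q-learning-with-experience-replay idea concrete, the following is a minimal illustrative sketch, not the paper's implementation: the paper uses a deep neural network, whereas this sketch substitutes a linear Q-function approximator to stay self-contained; all class names and hyperparameters are assumptions introduced here.

```python
import random
from collections import deque

import numpy as np


class ReplayBuffer:
    """Fixed-capacity experience-replay buffer of (s, a, r, s', done) tuples."""

    def __init__(self, capacity):
        self.buf = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state, done):
        self.buf.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # Uniform sampling breaks the temporal correlation of transitions.
        batch = random.sample(self.buf, batch_size)
        s, a, r, s2, d = (np.array(x) for x in zip(*batch))
        return s, a, r, s2, d.astype(float)

    def __len__(self):
        return len(self.buf)


class LinearQAgent:
    """Q-learning with a linear stand-in for the paper's network: Q(s, a) = s @ W[:, a]."""

    def __init__(self, n_features, n_actions, lr=0.5, gamma=0.9):
        self.W = np.zeros((n_features, n_actions))
        self.lr, self.gamma = lr, gamma

    def act(self, state, epsilon):
        # Epsilon-greedy exploration over the current Q estimates.
        if random.random() < epsilon:
            return random.randrange(self.W.shape[1])
        return int(np.argmax(state @ self.W))

    def update(self, s, a, r, s2, done):
        # TD(0) target; 'done' zeroes out the bootstrapped term.
        target = r + (1.0 - done) * self.gamma * np.max(s2 @ self.W, axis=1)
        pred = (s @ self.W)[np.arange(len(a)), a]
        td_error = target - pred
        # Averaged gradient step on the squared TD error over the minibatch.
        for i in range(len(a)):
            self.W[:, a[i]] += self.lr * td_error[i] * s[i] / len(a)
```

Sampling minibatches from the buffer, rather than learning from consecutive transitions, is what stabilizes the updates; a deep Q-network would replace `LinearQAgent.W` with network weights and the update with backpropagation.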
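The PID tracking step described above can likewise be sketched in a few lines. This is a generic discrete-time PID controller driving a simple first-order speed model toward a setpoint; the gains, time step, and plant model are illustrative assumptions, not values from the paper.

```python
class PIDController:
    """Discrete-time PID controller for tracking a speed setpoint."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def step(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        # Skip the derivative term on the very first sample to avoid a kick.
        derivative = (
            0.0 if self.prev_error is None
            else (error - self.prev_error) / self.dt
        )
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


# Illustrative closed loop: the control output is treated as acceleration,
# so the vehicle speed v integrates it toward the recommended 20 m/s.
pid = PIDController(kp=1.0, ki=0.1, kd=0.05, dt=0.1)
v = 0.0
for _ in range(1000):
    v += pid.step(20.0, v) * pid.dt
```

The derivative term is what damps the response; it is this damping that the abstract credits for the smooth trajectory and reduced jerk.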