{"title":"Reinforcement Learning Based Robot Control","authors":"Z. Guliyev, Ali Parsayan","doi":"10.1109/AICT55583.2022.10013595","DOIUrl":null,"url":null,"abstract":"Reinforcement learning (RL) has been proven to be a feasible method for learning complicated actions autonomously from sensory observations. Even though many of the deep RL studies have been centered on modelled control and computer games, which has nothing to do with the limits of learning in actual surroundings, deep RL has also revealed its potential in allowing robots to acquire complicated abilities in the real-world situations. Real-world robotics, on the other hand, is an intriguing area for testing the algorithms of this kind, because it is directly related to the learning procedure of humans. Deep RL might enable developing movement abilities without a precise modelling of the robot dynamics and with minimum engineering. However, because of hyper-parameter sensitivity and low sampling capability, it is difficult to implement deep RL to robotic tasks involving real-world applications. It is comparable simple to tune hyper-parameters in simulations, while it can be a challenging task when it comes to physical world, for example, biped robots. Acquiring the ability to move and perceive in the actual world involves a variety of difficulties, some are simpler to handle than others that are frequently overlooked in RL studies which are limited to simulated environments. This paper provides approaches to deal with a variety of frequent difficulties in deep RL arising while training a biped robot to walk and follow a specific path.","PeriodicalId":441475,"journal":{"name":"2022 IEEE 16th International Conference on Application of Information and Communication Technologies (AICT)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE 16th International Conference on Application of Information and Communication Technologies (AICT)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/AICT55583.2022.10013595","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0
Abstract
Reinforcement learning (RL) has been proven to be a feasible method for learning complicated actions autonomously from sensory observations. Although much deep RL research has centered on simulated control tasks and computer games, which are not subject to the constraints of learning in real surroundings, deep RL has also revealed its potential for allowing robots to acquire complicated skills in real-world situations. Real-world robotics, moreover, is an intriguing testbed for algorithms of this kind, because it is directly related to the way humans learn. Deep RL could enable robots to develop movement skills without precise modelling of the robot dynamics and with minimal engineering effort. However, because of hyper-parameter sensitivity and low sample efficiency, deep RL is difficult to apply to robotic tasks in real-world settings. Tuning hyper-parameters is comparatively simple in simulation, but it can be challenging on physical systems, for example biped robots. Acquiring the ability to move and perceive in the real world involves a variety of difficulties, some easier to handle than others, that are frequently overlooked in RL studies limited to simulated environments. This paper presents approaches for dealing with several common difficulties in deep RL that arise while training a biped robot to walk and follow a specific path.
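As a concrete illustration of the kind of training pipeline the abstract alludes to, below is a minimal sketch of training a simulated biped to walk with deep RL. It uses Gymnasium's BipedalWalker-v3 environment and the PPO implementation from Stable-Baselines3; these libraries, and the specific hyper-parameter values shown, are illustrative assumptions, not the setup used in the paper.

```python
# Minimal sketch: deep RL for simulated biped locomotion.
# Assumptions (not from the paper): Gymnasium's BipedalWalker-v3
# environment and the PPO implementation from Stable-Baselines3.
import gymnasium as gym
from stable_baselines3 import PPO

# The simulated biped task: observations include hull angle, joint
# speeds, and lidar readings; actions are torques for four leg joints.
env = gym.make("BipedalWalker-v3")

# PPO with explicit hyper-parameters; as the abstract notes, deep RL is
# sensitive to these values, and settings tuned in simulation often do
# not transfer directly to a physical robot.
model = PPO(
    "MlpPolicy",
    env,
    learning_rate=3e-4,   # illustrative value, not from the paper
    n_steps=2048,
    batch_size=64,
    gamma=0.99,
    verbose=1,
)

# Low sample efficiency in practice: on the order of millions of
# environment steps are typically needed for stable walking.
model.learn(total_timesteps=1_000_000)
model.save("biped_walker_ppo")

# Evaluate the learned policy for one episode.
obs, info = env.reset()
done = False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
env.close()
```

Following a specific path, as studied in the paper, would additionally require shaping the reward around the target trajectory; the sketch above covers only the basic walking task.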