{"title":"Q-learning of complex behaviours on a six-legged walking machine","authors":"F. Kirchner","doi":"10.1109/EURBOT.1997.633565","DOIUrl":null,"url":null,"abstract":"We present work on a six-legged walking machine that uses a hierarchical version of Q-learning (HQL) to learn both the elementary swing and stance movements of individual legs as well as the overall coordination scheme to perform forward movements. The architecture consists of a hierarchy of local controllers implemented in layers. The lowest layer consists of control modules performing elementary actions, like moving a leg up, down, left or right to achieve the elementary swing and stance motions for individual legs. The next level consists of controllers that learn to perform more complex tasks like forward movement by using the previously learned, lower level modules. On the third the highest layer in the architecture presented here the previously learned complex movements are themselves reused to achieve goals in the environment using external sensory input. The work is related to similar, although simulation-based, work by Lin (1993) on hierarchical reinforcement learning and Singh (1994) on compositional Q-learning. We report on the HQL architecture as well as on its implementation on the walking machine SIR ARTHUR. Results from experiments carried out on the real robot are reported to show the applicability of the HQL approach to real world robot problems.","PeriodicalId":129683,"journal":{"name":"Proceedings Second EUROMICRO Workshop on Advanced Mobile Robots","volume":"30 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1997-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"51","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings Second EUROMICRO Workshop on Advanced Mobile Robots","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/EURBOT.1997.633565","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 51
Abstract
We present work on a six-legged walking machine that uses a hierarchical version of Q-learning (HQL) to learn both the elementary swing and stance movements of individual legs and the overall coordination scheme required to perform forward movements. The architecture consists of a hierarchy of local controllers implemented in layers. The lowest layer consists of control modules performing elementary actions, such as moving a leg up, down, left, or right, to achieve the elementary swing and stance motions for individual legs. The next level consists of controllers that learn to perform more complex tasks, such as forward movement, by using the previously learned lower-level modules. On the third and highest layer of the architecture presented here, the previously learned complex movements are themselves reused to achieve goals in the environment using external sensory input. The work is related to similar, although simulation-based, work by Lin (1993) on hierarchical reinforcement learning and by Singh (1994) on compositional Q-learning. We report on the HQL architecture as well as on its implementation on the walking machine SIR ARTHUR. Results from experiments carried out on the real robot are reported to demonstrate the applicability of the HQL approach to real-world robot problems.
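The abstract does not give implementation details, so the following is only a minimal sketch of the layered idea it describes: tabular Q-learners at the lowest level choose elementary leg actions, while a higher-level learner's action set consists of the already-trained lower-level behaviours rather than raw motor commands. All names here (QTable, leg_controllers, gait_controller) are hypothetical illustrations, not the paper's code.

```python
import random
from collections import defaultdict

class QTable:
    """Minimal tabular Q-learner with epsilon-greedy action selection
    and the standard one-step Q-learning update."""
    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)          # (state, action) -> estimated value
        self.actions = list(actions)
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, state):
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, s, a, r, s_next):
        best_next = max(self.q[(s_next, a2)] for a2 in self.actions)
        self.q[(s, a)] += self.alpha * (r + self.gamma * best_next - self.q[(s, a)])

# Layer 1 (hypothetical): one learner per leg over elementary movements,
# which would be trained to produce swing and stance motions.
leg_actions = ["up", "down", "left", "right"]
leg_controllers = [QTable(leg_actions) for _ in range(6)]

# Layer 2 (hypothetical): a coordination learner whose "actions" are indices
# of the previously learned leg behaviours, so learning at this level
# composes lower-level modules instead of choosing motor commands directly.
gait_controller = QTable(actions=range(len(leg_controllers)))
```

In this sketch the hierarchy is purely structural: executing a high-level action would mean running the selected lower-level policy until its subgoal terminates, which is the composition principle the abstract attributes to HQL, though the paper's actual state, action, and reward definitions are not reproduced here.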