{"title":"基于上下文的自适应机器人行为学习模型(CARB-LM)","authors":"Joohee Suh, Dean Frederick Hougen","doi":"10.1109/CICA.2014.7013253","DOIUrl":null,"url":null,"abstract":"An important, long-term objective of intelligent robotics is to develop robots that can learn about and adapt to new environments. We focus on developing a learning model that can build up new knowledge through direct experience with and feedback from an environment. We designed and constructed Context-based Adaptive Robot Behavior-Learning Model (CARB-LM) which is conceptually inspired by Hebbian and anti-Hebbian learning and by neuromodulation in neural networks. CARB-LM has two types of learning processes: (1) context-based learning and (2) reward-based learning. The former uses past accumulated positive experiences as analogies to current conditions, allowing the robot to infer likely rewarding behaviors, and the latter exploits current reward information so the robot can refine its behaviors based on current experience. The reward is acquired by checking the effect of the robot's behavior in the environment. As a first test of this model, we tasked a simulated TurtleBot robot with moving smoothly around a previously unexplored environment. We simulated this environment using ROS and Gazebo and performed experiments to evaluate the model. The robot showed substantial learning and greatly outperformed both a hand-coded controller and a randomly wandering robot.","PeriodicalId":340740,"journal":{"name":"2014 IEEE Symposium on Computational Intelligence in Control and Automation (CICA)","volume":"49 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":"{\"title\":\"Context-based adaptive robot behavior learning model (CARB-LM)\",\"authors\":\"Joohee Suh, Dean Frederick Hougen\",\"doi\":\"10.1109/CICA.2014.7013253\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"An important, long-term objective of intelligent robotics is to develop robots that can learn about and adapt to new environments. We focus on developing a learning model that can build up new knowledge through direct experience with and feedback from an environment. We designed and constructed Context-based Adaptive Robot Behavior-Learning Model (CARB-LM) which is conceptually inspired by Hebbian and anti-Hebbian learning and by neuromodulation in neural networks. CARB-LM has two types of learning processes: (1) context-based learning and (2) reward-based learning. The former uses past accumulated positive experiences as analogies to current conditions, allowing the robot to infer likely rewarding behaviors, and the latter exploits current reward information so the robot can refine its behaviors based on current experience. The reward is acquired by checking the effect of the robot's behavior in the environment. As a first test of this model, we tasked a simulated TurtleBot robot with moving smoothly around a previously unexplored environment. We simulated this environment using ROS and Gazebo and performed experiments to evaluate the model. 
The robot showed substantial learning and greatly outperformed both a hand-coded controller and a randomly wandering robot.\",\"PeriodicalId\":340740,\"journal\":{\"name\":\"2014 IEEE Symposium on Computational Intelligence in Control and Automation (CICA)\",\"volume\":\"49 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2014-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"3\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2014 IEEE Symposium on Computational Intelligence in Control and Automation (CICA)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/CICA.2014.7013253\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2014 IEEE Symposium on Computational Intelligence in Control and Automation (CICA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CICA.2014.7013253","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract: An important, long-term objective of intelligent robotics is to develop robots that can learn about and adapt to new environments. We focus on developing a learning model that can build up new knowledge through direct experience with, and feedback from, an environment. We designed and constructed the Context-based Adaptive Robot Behavior-Learning Model (CARB-LM), which is conceptually inspired by Hebbian and anti-Hebbian learning and by neuromodulation in neural networks. CARB-LM has two types of learning processes: (1) context-based learning and (2) reward-based learning. The former uses past accumulated positive experiences as analogies to current conditions, allowing the robot to infer likely rewarding behaviors; the latter exploits current reward information so that the robot can refine its behaviors based on current experience. The reward is acquired by checking the effect of the robot's behavior in the environment. As a first test of this model, we tasked a simulated TurtleBot robot with moving smoothly around a previously unexplored environment. We simulated this environment using ROS and Gazebo and performed experiments to evaluate the model. The robot showed substantial learning and greatly outperformed both a hand-coded controller and a randomly wandering robot.
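The abstract describes the two learning processes only at a high level, with no implementation details. Purely as an illustration of how such a loop might be structured, the Python sketch below combines an analogy-based vote over stored positive experiences (the "context-based" process) with an online reward-driven value update (the "reward-based" process). Every class and function name, the similarity measure, and the update rule here are hypothetical assumptions for illustration and are not taken from the paper.

```python
# Minimal sketch in the spirit of CARB-LM (not the authors' implementation).
import random

class TwoProcessLearner:
    def __init__(self, behaviors, learning_rate=0.1):
        self.behaviors = behaviors                  # discrete candidate behaviors
        self.memory = []                            # (context, behavior) pairs that earned positive reward
        self.values = {b: 0.0 for b in behaviors}   # reward-based estimates, refined online
        self.lr = learning_rate

    def similarity(self, c1, c2):
        # Hypothetical context similarity: inverse L1 distance between sensor vectors.
        return 1.0 / (1.0 + sum(abs(a - b) for a, b in zip(c1, c2)))

    def choose(self, context):
        # Context-based process: treat stored positive experiences as analogies
        # to the current context and let them vote for the behaviors they used.
        votes = {b: 0.0 for b in self.behaviors}
        for past_context, behavior in self.memory:
            votes[behavior] += self.similarity(context, past_context)
        # Combine the analogy votes with the reward-based estimates; explore
        # randomly while nothing has been learned yet.
        scores = {b: votes[b] + self.values[b] for b in self.behaviors}
        if max(scores.values()) <= 0.0:
            return random.choice(self.behaviors)
        return max(scores, key=scores.get)

    def update(self, context, behavior, reward):
        # Reward-based process: refine the estimate for the executed behavior.
        self.values[behavior] += self.lr * (reward - self.values[behavior])
        # Keep only positively rewarded experiences for later analogy.
        if reward > 0:
            self.memory.append((context, behavior))

if __name__ == "__main__":
    learner = TwoProcessLearner(behaviors=["forward", "turn_left", "turn_right"])
    context = (0.8, 0.2, 0.5)          # e.g. normalized range-sensor readings (made up)
    action = learner.choose(context)
    # In the paper's setting, the reward would come from checking the effect of
    # the behavior in the simulated (ROS/Gazebo) environment.
    learner.update(context, action, reward=1.0)
    print(action, learner.values)
```

In this sketch the two processes interact exactly as the abstract suggests: accumulated positive experiences bias behavior selection toward situations that resemble past successes, while the running reward estimates refine those choices from current experience. How CARB-LM actually represents contexts, measures similarity, and weights the two processes is not specified in the abstract.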