Adaptive reaching model for visual-motor mapping applied to redundant robotic arms

J. L. Pedreño-Molina, A. Guerrero-González, J. López-Coronado

Proceedings of the Third International Workshop on Robot Motion and Control (RoMoCo '02), 2002. DOI: 10.1109/ROMOCO.2002.1177144
Most control algorithms for robotic reaching and grasping tasks driven by visual and motor perception are based on feedback systems, which limits both the performance of remote reaching applications and the robustness of the overall system. In this paper, a robust learning-based model for visual-motor coordination is presented. The architecture is inspired by how the human nervous system maps sensory stimuli onto motor joints and sends motor commands to each arm in open-loop mode, starting from the initial visual and proprioceptive information. The self-organizing character of the model yields good robustness, flexibility and adaptability on both simulated and real robotic platforms. Coordination of information across different spatial representations is based on the Vector Associative Map (VAM) algorithms developed at the Department of Cognitive and Neural Systems (CNS), Boston University. The compatibility requirements and the adaptive capability of the system together provide a solution for the control of redundant manipulators.
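To make the idea concrete, the sketch below shows a minimal VAM-style visuo-motor mapping in Python for a simulated planar three-joint (hence redundant) arm. It is not the authors' implementation: the arm geometry, the single linear map, the motor-babbling routine, and all names (fk, Vam, learn, reach) are assumptions introduced for illustration only.

```python
# Minimal sketch of a VAM-style visuo-motor mapping (assumed setup, not the
# paper's code): a linear map W from a 2-D visual difference vector to joint
# increments is learned by motor babbling, then used to drive reaching.
import numpy as np

def fk(q, lengths=(1.0, 0.8, 0.6)):
    """Forward kinematics of a planar 3-joint arm: angles -> 2-D position."""
    x = y = angle = 0.0
    for qi, li in zip(q, lengths):
        angle += qi
        x += li * np.cos(angle)
        y += li * np.sin(angle)
    return np.array([x, y])

class Vam:
    """Linear map W: visual difference vector -> joint-angle increments."""
    def __init__(self, n_joints=3, n_visual=2, lr=0.1):
        self.W = np.zeros((n_joints, n_visual))
        self.lr = lr

    def learn(self, n_trials=20000, step=0.05, seed=0):
        # Motor babbling: random joint increments play the role of the
        # endogenous random generator; W learns to reproduce the joint
        # change that accounts for the observed visual change.
        rng = np.random.default_rng(seed)
        q = rng.uniform(-1.0, 1.0, 3)
        for _ in range(n_trials):
            dq = step * rng.standard_normal(3)      # random motor command
            dv = fk(q + dq) - fk(q)                 # observed visual shift
            err = dq - self.W @ dv                  # mismatch to correct
            self.W += self.lr * np.outer(err, dv)   # LMS-style update
            q = np.clip(q + dq, -1.5, 1.5)

    def reach(self, q, target, n_steps=100, gain=0.5):
        # fk() here stands in for an internal kinematic estimate, so no
        # external visual feedback is used during the movement, loosely
        # mirroring the open-loop execution described in the abstract.
        for _ in range(n_steps):
            q = q + gain * (self.W @ (target - fk(q)))
        return q

vam = Vam()
vam.learn()
q_final = vam.reach(np.zeros(3), np.array([1.2, 0.9]))
print("residual reach error:", np.linalg.norm(fk(q_final) - np.array([1.2, 0.9])))
```

Under these assumptions, W converges toward an averaged pseudoinverse of the arm Jacobian over the babbled workspace, so iterating the map moves the end-effector toward the target; because a single global linear map averages over local Jacobians, the residual error grows for targets far from the region explored during babbling.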