Pub Date: 2018-10-01 | DOI: 10.1109/IROS.2018.8594436
N. Cho, Sang Hyoung Lee, Tae-Joung Kwon, I. Suh, Hong-Seok Kim
In this paper, we propose a method to model social interaction between a human and a virtual avatar. To this end, two human performers first perform social interactions following the Learning from Demonstration paradigm. The relative relevance of all joints of both performers is then modeled from these human demonstrations. However, among all possible combinations of relative joints, only those combinations that play key roles in the social interaction should be selected. We select such significant features based on the joint motion significance, a metric that quantifies each joint's importance by computing both the temporal entropy and the spatial entropy of all human joints from a Gaussian mixture model. To evaluate the proposed method, we performed experiments on five social interactions: hand shaking, hand slapping, shoulder holding, object passing, and target kicking. In addition, we compared our method against existing modeling methods based on different metrics, such as principal component analysis and information gain.
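The abstract does not give the exact formulas, so the sketch below is only one plausible reading of the two entropies: a spatial entropy taken as the weight-averaged differential entropy of the GMM components fitted to a joint's trajectory, and a temporal entropy taken as the mean Shannon entropy of the per-frame component responsibilities. All function names and definitions here are assumptions, not the authors' method.

```python
import numpy as np

def gaussian_entropy(cov):
    """Differential entropy of a Gaussian with covariance matrix `cov`."""
    d = cov.shape[0]
    return 0.5 * (d * (1.0 + np.log(2.0 * np.pi)) + np.log(np.linalg.det(cov)))

def spatial_entropy(weights, covs):
    """Weight-averaged component entropy: a proxy for how widely a joint's
    positions spread across the mixture components."""
    return float(sum(w * gaussian_entropy(c) for w, c in zip(weights, covs)))

def temporal_entropy(responsibilities):
    """Mean Shannon entropy of per-frame component responsibilities
    (rows of an n_frames x n_components matrix): high when the joint
    keeps switching between components over time."""
    r = np.clip(responsibilities, 1e-12, 1.0)
    return float(np.mean(-np.sum(r * np.log(r), axis=1)))
```

A joint whose motion both covers a large volume (high spatial entropy) and changes regime often (high temporal entropy) would rank as more significant under this reading.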
Title: Modeling Social Interaction Based on Joint Motion Significance | Published in: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 3373-3380
Pub Date: 2018-10-01 | DOI: 10.1109/IROS.2018.8593839
H. Hong, B. Lee
Distribution-to-distribution normal distributions transform (NDT-D2D) is one of the fastest point set registration methods. Since the normal distributions transform (NDT) is a set of normal distributions generated over discrete, regular cells, local minima of the objective function are an issue for NDT-D2D. We also found that the objective function based on the L2 distance between distributions correlates negatively with rotational alignment. To overcome these problems, we present a method that uses dynamic scaling factors on the covariances to improve the accuracy of NDT-D2D. Two scaling factors are defined for the preceding and current NDTs, respectively, and they are varied dynamically in each iteration of NDT-D2D. We implemented the proposed method on top of both conventional NDT-D2D and probabilistic NDT-D2D and compared it to NDT-D2D with fixed scaling factors on the KITTI benchmark data set. We also experimented with odometry estimation from an initial guess as an application of distribution-to-distribution probabilistic NDT (PNDT-D2D) with the proposed method. As a result, the proposed method improves both the translational and the rotational accuracy of NDT-D2D and PNDT-D2D.
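The abstract does not state the exact score or schedule, so the following sketch only illustrates the general idea: inflating the summed covariances inside the Gaussian-overlap term of a D2D-style objective widens the basin of attraction, and a made-up geometric decay schedule (`scale_schedule`, an assumption) shrinks the factors back toward 1 over iterations.

```python
import numpy as np

def scaled_l2_score(mu_p, cov_p, mu_q, cov_q, s_p, s_q):
    """Overlap between two normal distributions whose covariances are
    inflated by scaling factors s_p and s_q. Larger factors flatten the
    exponent, so distant cell pairs still produce useful gradients."""
    d = mu_p - mu_q
    c = s_p * cov_p + s_q * cov_q
    return float(np.exp(-0.5 * d @ np.linalg.solve(c, d)))

def scale_schedule(k, s0=4.0, decay=0.7):
    """Illustrative per-iteration schedule: start wide, shrink toward 1."""
    return max(1.0, s0 * decay ** k)
```

With identical means the score is 1 regardless of scaling; for offset means, larger factors raise the score, which is the widened-basin effect the dynamic scaling exploits early in the optimization.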
Title: Dynamic Scaling Factors of Covariances for Accurate 3D Normal Distributions Transform Registration | Published in: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1190-1196
Pub Date: 2018-10-01 | DOI: 10.1109/IROS.2018.8593961
F. Arvin, A. E. Turgut, T. Krajník, Salar Rahimi, Ilkin Ege Okay, Shigang Yue, S. Watson, B. Lennox
In this paper, we propose a pheromone-based aggregation method built on the state-of-the-art BEECLUST algorithm. We investigated the impact of pheromone-based communication on the efficiency with which robotic swarms locate and aggregate at areas with a given cue. In particular, we evaluated the impact of pheromone evaporation and diffusion on the time required for the swarm to aggregate. In a series of simulated and real-world evaluation trials, we demonstrated that augmenting the BEECLUST method with an artificial pheromone results in faster aggregation times.
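As a rough illustration of the evaporation and diffusion dynamics (the paper's actual pheromone model may differ; `evaporation` and `diffusion` are illustrative rate parameters), one update step of a grid-based pheromone field could look like this:

```python
import numpy as np

def pheromone_step(grid, evaporation=0.05, diffusion=0.1):
    """One update of a discrete pheromone field: each cell exchanges a
    fraction of its value with the average of its 4 neighbours, then
    loses a fraction to evaporation. Borders use edge replication."""
    padded = np.pad(grid, 1, mode='edge')
    neigh = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
             padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
    diffused = (1.0 - diffusion) * grid + diffusion * neigh
    return (1.0 - evaporation) * diffused
```

Higher evaporation makes the cue short-lived (pheromone mass decays each step), while higher diffusion smears it outward; the trade-off between the two is exactly the knob the aggregation-time evaluation turns.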
Title: ΦClust: Pheromone-Based Aggregation for Robotic Swarms | Published in: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 4288-4294
Pub Date: 2018-10-01 | DOI: 10.1109/IROS.2018.8594432
N. García, R. Suárez, J. Rosell
This paper addresses the problem of obtaining human-like motions in hand-arm robotic systems performing grasping actions. The focus is on the coordinated movements of the robotic arm and the anthropomorphic mechanical hand mounted on it. For this, human movements performing different grasps are captured and mapped to the robot in order to compute the human hand synergies. These synergies are used both to obtain human-like movements and to reduce the complexity of the planning phase by lowering the dimension of the search space. In addition, the paper proposes a sampling-based planner that guides the motion planning along the synergies while considering different types of grasps. The introduced approach is tested in an application example and thoroughly compared with a state-of-the-art planning algorithm, obtaining better results.
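Postural synergies are conventionally extracted by principal component analysis of recorded hand joint configurations. The sketch below shows that standard construction via SVD, which is an assumption about the paper's pipeline rather than its exact implementation:

```python
import numpy as np

def hand_synergies(postures, k=2):
    """PCA via SVD on an (n_samples x n_joints) matrix of recorded hand
    configurations; returns the mean posture and the first k synergies
    (rows of Vt are the principal directions)."""
    mean = postures.mean(axis=0)
    _, _, vt = np.linalg.svd(postures - mean, full_matrices=False)
    return mean, vt[:k]

def project(mean, synergies, posture):
    """Activation of each synergy for a given full posture."""
    return (posture - mean) @ synergies.T

def reconstruct(mean, synergies, activations):
    """Map low-dimensional synergy activations back to a full posture."""
    return mean + activations @ synergies
```

Planning then searches over the k-dimensional activation space instead of the full joint space, which is how the synergies shrink the search space while keeping the resulting postures human-like.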
Title: Planning Hand-Arm Grasping Motions with Human-Like Appearance | Published in: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 3517-3522
Pub Date: 2018-10-01 | DOI: 10.1109/IROS.2018.8594047
Bumjin Jang, Amanda Aho, B. Nelson, S. Pané
Small-scale robots with soft joints and hinges have recently attracted interest because these components allow for more sophisticated locomotion mechanisms. Here, we investigate two different types of nanoscale swimmers as depicted in Figure 1. One consists of a rigid magnetic head linked to a semi-soft tail (1-link swimmer). Another consists of a rigid magnetic head and tail connected by a soft hinge (2-link swimmer). Both swimmers exhibit undulatory locomotion under an applied oscillating magnetic field. The speeds of the swimmers are assessed as a function of the oscillating magnetic field frequency and the sweeping angle. We find that a resonance-like frequency increases as the length decreases, and, in general, the speed increases as the sweeping angle increases. Last, we show that 2-link swimmers can also swim in a corkscrew-like pattern under rotating magnetic fields.
Title: Fabrication and Locomotion of Flexible Nanoswimmers | Published in: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 6193-6198
Pub Date: 2018-10-01 | DOI: 10.1109/iros.2018.8594286
Title: 3. Conference Application | Published in: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (front-matter entry; no abstract or page range available)
Pub Date: 2018-10-01 | DOI: 10.1109/IROS.2018.8594039
S. Schulz, A. Seibel, J. Schlattmann
The direct kinematics problem of the Stewart-Gough platform can be solved by measuring the manipulator platform's orientation and the orientations of two of the linear actuators instead of the lengths of all six linear actuators. In this paper, the effect of measurement errors on the calculated pose of the manipulator platform is investigated using the Cramér-Rao lower bound and extensive experiments on a state-of-the-art Stewart-Gough platform. Furthermore, different algorithms and filters for one-time as well as continuous pose determination are investigated. Finally, possible sensor fusion concepts for one-time pose determination are presented to increase the robustness against noise and measurement errors.
Title: Performance of an IMU-Based Sensor Concept for Solving the Direct Kinematics Problem of the Stewart-Gough Platform | Published in: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 5055-5062
Pub Date: 2018-10-01 | DOI: 10.1109/IROS.2018.8593965
Kaihong Huang, C. Stachniss
Pose estimation and mapping are key capabilities of most autonomous vehicles, and thus a number of localization and SLAM algorithms have been developed in the past. Autonomous robots and cars are typically equipped with multiple sensors; often, the sensor suite includes a camera and a laser range finder. In this paper, we consider the problem of incremental ego-motion estimation using a monocular camera and a laser range finder jointly. We propose a new algorithm that exploits the advantages of both sensors: the ability of cameras to determine orientations well and the ability of laser range finders to estimate scale and to directly obtain 3D point clouds. Our approach estimates the 5-degree-of-freedom relative orientation from image pairs through feature point correspondences and formulates the remaining scale estimation as a new variant of the iterative closest point problem with only one degree of freedom. We furthermore exploit the camera information in a new way to constrain the data association between laser point clouds. The experiments presented in this paper suggest that our approach accurately estimates the ego-motion of a vehicle and that it obtains more accurate frame-to-frame alignments than either sensor modality alone.
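A minimal sketch of the one-degree-of-freedom scale estimation, assuming the camera supplies the rotation `R` and the unit translation direction `t_dir`, and using a brute-force grid search with nearest-neighbour point-to-point correspondences in place of the paper's ICP variant:

```python
import numpy as np

def scale_1dof_icp(src, dst, R, t_dir, s_grid=np.linspace(0.0, 5.0, 501)):
    """Search over the single unknown scale s: the camera fixes the
    rotation R and translation direction t_dir, the laser points fix the
    magnitude. src, dst are (n, 3) and (m, 3) point clouds."""
    rotated = src @ R.T
    best_s, best_err = None, np.inf
    for s in s_grid:
        moved = rotated + s * t_dir
        # brute-force nearest-neighbour alignment error for the sketch
        d = np.linalg.norm(moved[:, None, :] - dst[None, :, :], axis=2)
        err = d.min(axis=1).mean()
        if err < best_err:
            best_s, best_err = s, err
    return best_s
```

Because only one scalar is free, even this naive search is cheap compared with a full 6-DoF ICP; the paper's formulation solves the same 1-DoF problem within an ICP iteration rather than by grid search.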
Title: Joint Ego-motion Estimation Using a Laser Scanner and a Monocular Camera Through Relative Orientation Estimation and 1-DoF ICP | Published in: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 671-677
Pub Date: 2018-10-01 | DOI: 10.1109/IROS.2018.8594015
M. Ahn, Hosik Chae, D. Hong
This paper presents a complete motion planning approach for quadruped locomotion across unknown terrain using a framework based on mixed-integer convex optimization and visual feedback. Vision data is used to find convex polygons in the surrounding environment, which act as potentially feasible foothold regions. An initial goal position is then provided, from which the best-feasible-destination planner solves for an actually feasible goal position based on the extracted polygons. Next, a footstep planner uses the feasible goal position to plan a fixed number of footsteps, which may or may not bring the robot to that position. The center of mass (COM) trajectory planner based on quadratic programming is extended to solve for a trajectory in 3D space while maintaining convexity, which reduces computation time and allows the robot to plan and execute motions online. The suggested method is implemented as a policy rather than a path planner, but its performance as a path planner is also shown. The approach is verified both in simulation and on a physical robot, ALPHRED, walking on various unknown terrains.
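Mixed-integer footstep planners typically encode each convex foothold region as a set of linear inequalities A x <= b and let a binary variable assign each footstep to one region. The helper below sketches only that halfspace conversion and feasibility check for 2D polygons; it illustrates the representation, not the paper's planner:

```python
import numpy as np

def halfspaces(polygon):
    """Convert a counter-clockwise convex 2D polygon (list of vertices)
    to A x <= b form, the representation a mixed-integer planner needs
    for per-footstep region constraints."""
    A, b = [], []
    n = len(polygon)
    for i in range(n):
        p = np.asarray(polygon[i], dtype=float)
        q = np.asarray(polygon[(i + 1) % n], dtype=float)
        edge = q - p
        normal = np.array([edge[1], -edge[0]])  # outward normal for CCW order
        A.append(normal)
        b.append(normal @ p)
    return np.array(A), np.array(b)

def in_region(point, A, b, tol=1e-9):
    """True when `point` satisfies every halfspace (lies in the polygon)."""
    return bool(np.all(A @ np.asarray(point, dtype=float) <= b + tol))
```

In the full mixed-integer program, `in_region` becomes a big-M constraint activated by the region's binary variable, so the solver picks both the region assignment and the footstep position jointly.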
Title: Stable, Autonomous, Unknown Terrain Locomotion for Quadrupeds Based on Visual Feedback and Mixed-Integer Convex Optimization | Published in: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 3791-3798
Pub Date: 2018-10-01 | DOI: 10.1109/IROS.2018.8594019
Francisco Naveros, J. Garrido, A. Arleo, E. Ros, N. Luque
Studying and understanding the computational primitives of our neural system requires a diverse and complementary set of techniques. In this work, we use the Neurorobotics Platform (NRP) to evaluate vestibulo-ocular cerebellar adaptation (vestibulo-ocular reflex, VOR) mediated by two STDP mechanisms located at the cerebellar molecular layer and the vestibular nuclei, respectively. This simulation study adopts an experimental setup (rotatory VOR, r-VOR) widely used by neuroscientists to better understand the contribution of specific cerebellar properties (i.e., distributed STDP, neural properties, cerebellar coding topology, etc.) to r-VOR adaptation. The work proposes and describes an embodiment solution in which we endow a simulated humanoid robot (iCub) with a spiking cerebellar model by means of the NRP and confront the humanoid with an r-VOR task. The results validate the adaptive capabilities of the spiking cerebellar model (with STDP) in a perception-action closed loop (r-VOR), causing the simulated iCub robot to mimic human behavior.
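For readers unfamiliar with STDP, a standard pair-based weight update looks like the sketch below. This is not the paper's specific plasticity kernels at the molecular layer or vestibular nuclei; `a_plus`, `a_minus`, and `tau` are illustrative constants.

```python
import math

def stdp_update(w, dt, a_plus=0.01, a_minus=0.012, tau=20.0, w_max=1.0):
    """Pair-based STDP rule with dt = t_post - t_pre (ms). A presynaptic
    spike shortly before the postsynaptic one (dt > 0) potentiates the
    synapse; the reverse order depresses it. The weight is clipped to
    [0, w_max]."""
    if dt > 0:
        w += a_plus * math.exp(-dt / tau)
    else:
        w -= a_minus * math.exp(dt / tau)
    return min(max(w, 0.0), w_max)
```

Distributing two such rules at different sites, as the abstract describes, lets each site adapt a different aspect of the reflex (e.g. gain versus phase) during the closed-loop r-VOR task.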
Title: Exploring Vestibulo-Ocular Adaptation in a Closed-Loop Neuro-Robotic Experiment Using STDP: A Simulation Study | Published in: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1-9