Nonlinear inverse reinforcement learning with mutual information and Gaussian process
De C. Li, Yu Q. He, Feng Fu
Pub Date: 2014-12-01 | DOI: 10.1109/ROBIO.2014.7090537
In this paper, a mutual information (MI) and Extreme Learning Machine (ELM) based inverse reinforcement learning (IRL) algorithm, termed MEIRL, is proposed to construct a nonlinear reward function. The basic idea of MEIRL is that, as in GPIRL, the reward function is learned with a Gaussian process and the importance of each feature is obtained by automatic relevance determination (ARD). Mutual information is then employed to evaluate the impact of each feature on the reward function; on this basis, an extreme learning machine is introduced, together with an adaptive model construction procedure, to choose the optimal subset of features, which also enhances the performance of the original GPIRL algorithm. Furthermore, to demonstrate the effectiveness of MEIRL, a highway-driving simulation is constructed. The simulation results show that MEIRL is comparable with state-of-the-art IRL algorithms in terms of generalization capability, but more efficient when the number of features is large.
A flexible tension-pressure tactile sensitive sensor array for the robot skin
Caixia Liu, Ying Huang, Ping Liu, Yugang Zhang, Haitao Yuan, Leiming Li, Y. Ge
Pub Date: 2014-12-01 | DOI: 10.1109/ROBIO.2014.7090749
In this paper, a flexible tension-pressure tactile sensor array is proposed for intelligent robot skin to detect external pressure under tensile and bending conditions. The flexible tension-pressure tactile sensor is fabricated with carbon nanotubes (CNTs) and carbon black (CB) as the conductive filler and silicone rubber as the insulating matrix. By studying the tension-pressure resistance characteristics of silicone rubber sensitive units filled with CNTs and CB, the following conclusions are drawn: the CNTs and CB form a solid “botryoid” conductive network structure in the filled conductive rubber, and the resistance of the conductive rubber is highly sensitive with good linearity under both tension and pressure when the filler mass fraction is 4% and the mass ratio of CNTs to CB is 2:3. In the proposed sensor array based on CNT/CB-filled conductive rubber, elastic electrodes overcome the inability of traditional sensors to measure tension, thereby making the sensor array flexible. The sensor array can detect pressure and tension simultaneously and can be applied to flexible robot skin at joints.
Manipulation task simulation using ROS and Gazebo
Wei Qian, Zeyang Xia, Jing Xiong, Yangzhou Gan, Yangchao Guo, Shaokui Weng, Hao Deng, Ying Hu, Jianwei Zhang
Pub Date: 2014-12-01 | DOI: 10.1109/ROBIO.2014.7090732
This paper presents a manipulator simulation and illustrates how robot control can be implemented in a short time. We complete a grasp-and-place task using a Gazebo virtual world and the Robot Operating System (ROS). ROS is a distributed framework that is widely used in robotics. Considering its advantages in hardware abstraction and code reuse, ROS was chosen to rapidly organize the task architecture, and, due to its compatibility with ROS, Gazebo was chosen as the main platform to simulate the designated motion of the virtual manipulator.
An intelligent vehicle tracking technology based on SURF feature and Mean-shift algorithm
Liu Yang, Wang Zhong-li, Cai Bai-gen
Pub Date: 2014-12-01 | DOI: 10.1109/ROBIO.2014.7090500
In traffic video surveillance systems, target-level tracking and feature-level tracking are two important research areas, and combining them is an interesting question. Mean-shift is a traditional target-level tracking algorithm that does not adapt to changes in vehicle scale and orientation. To solve this problem, an algorithm combining the SURF (speeded-up robust features) feature with the Mean-shift algorithm is proposed in this article. Feature-point scale and orientation information is used to give the algorithm scale and orientation adaptability. The tracking model of the vehicle is also updated in the algorithm. Experimental results show that the proposed algorithm handles vehicle scale and orientation changes better than the traditional algorithm, and the tracking result is also more accurate.
Saliency attention based abnormal event detection in video
Wang Huan, Huiwen Guo, Xinyu Wu
Pub Date: 2014-12-01 | DOI: 10.1109/ROBIO.2014.7090469
Most existing methods for abnormal event detection in the literature rely on a training phase. Different from conventional approaches, a saliency-attention-based abnormal event detection approach is proposed in this paper. It is inspired by the visual attention mechanism: abnormal events are those that attract the most attention in a video. Temporal and spatial abnormal saliency maps are first constructed, and the final abnormal event map is then formed by fusing them with dynamic coefficients. The temporal abnormal saliency map is constructed from the motion contrast between keypoints extracted from two successive video frames. The spatial abnormal saliency map is built from color contrast. Experiments on benchmark datasets show that the proposed method achieves accurate and robust abnormal event detection without a training phase.
Development of a handbell-performance robot
Yosuke Nihei, H. Matsuda, Yuuichi Taduke, S. Kudoh, T. Suehiro
Pub Date: 2014-12-01 | DOI: 10.1109/ROBIO.2014.7090587
This paper describes the development of a handbell-playing robot system. The conditions under which the bells are rung are derived from an analysis of a physical model of a handbell. We designed the robot hardware based on these conditions. We also designed the mapping of MIDI commands to the handbell-robot motions. Experiments verified that the robot system could play music according to MIDI scores.
Robot-mounted 500-fps 3-D shape measurement using motion-compensated coded structured light method
Jun Chen, Yongjiu Liu, Qingyi Gu, T. Aoyama, T. Takaki, I. Ishii
Pub Date: 2014-12-01 | DOI: 10.1109/ROBIO.2014.7090628
A high-frame-rate (HFR) structured light vision system is developed for observing moving three-dimensional (3-D) scenes; it is mountable on the end of a robot manipulator for 3-D shape inspection. Our system can simultaneously obtain depth images of 512×512 pixels at 500 fps by implementing a motion-compensated coded structured light method on an HFR camera-projector platform; the 3-D computation is accelerated using parallel processing on a GPU board. This method can remarkably reduce the synchronization errors in structured-light-based measurement that are encountered when multiple light patterns are projected at different timings; such synchronization errors grow as the ego-motion of the manipulator becomes larger. We demonstrate the performance of our system by showing several 3-D shape measurement results when the 3-D module is mounted on a fast-moving 6-DOF manipulator as a sensing head.
Analysis and simulation of kinematics of 5-DOF nuclear power station robot manipulator
Chunchao Chen, Jinsong Li, Jun Luo, Shaorong Xie, Huayan Pu, Ze Cui, J. Gu
Pub Date: 2014-12-01 | DOI: 10.1109/ROBIO.2014.7090634
To analyze the kinematic characteristics of a 5-DOF nuclear power station robot, the forward kinematics equations are established through the Denavit-Hartenberg (D-H) method. The workspace of the robot is plotted in MATLAB using the Monte Carlo method, and the inverse kinematic equations are established through Paul's inverse transform method. For the missing and redundant solutions that may appear when solving the inverse kinematics equations, the paper describes different treatments. To test the kinematic model of the manipulator, test procedures are designed and a door-opening planning simulation based on the forward and inverse kinematics is run in the multibody dynamics software RecurDyn to monitor the motion of the manipulator. The simulation experiments verify the rationality of the motion algorithm and link design parameters, and provide a reliable basis for studying the dynamics, control, and planning of the robot.
An integrated forward collision warning system based on monocular vision
Yao Deng, Huawei Liang, Zhiling Wang, Junjie Huang
Pub Date: 2014-12-01 | DOI: 10.1109/ROBIO.2014.7090499
Driving assistance systems have a significant influence on driving safety, and we introduce an integrated forward collision warning (FCW) system based on monocular vision. First, lane marking detection is presented to establish the ROI and reduce the search region of the original image. Second, vehicle hypotheses are extracted using Haar-like features and an AdaBoost classifier. Finally, to remove false positives in the hypothesis verification process, an SVM-based classifier with HOG features is utilized. Possible collisions trigger the warning based on time-to-collision (TTC), and the FCW system has been evaluated in dynamic environments. Experimental results show that the proposed system is robust and useful in practical applications.
Unfolded/tightened mechanism series support design based on characteristics of flying snake spine
Zuhui Jiang, Baolin Feng, Shibiao Zhao, Zhongshan Zheng, Lu Li, Kexuan Li
Pub Date: 2014-12-01 | DOI: 10.1109/ROBIO.2014.7090691
This article proposes a new design of a 3D unfolded/tightened mechanism based on the spine structure of the golden tree snake, which can realize torsion and bending. Inspired by the spatial deformation characteristics of the golden tree snake's vertebrae, and by analyzing the anatomy of the snake's spine bones and the distribution of its muscles, a bionic mechanism and drive unit are designed that can drive the 3D unfolded/tightened mechanism and meet its pose-adjustment requirements, in contrast to the traditional unfolded/tightened mechanism. A dynamics model of the robot, established by the Lagrange method, is also presented.