Whole-body Posture Generation by Adjusting Tool Force with CoG Movement: Application to Soil Digging
Pub Date: 2019-10-01 | DOI: 10.1109/Humanoids43949.2019.9035006
Takayuki Murooka, Riku Shigematsu, Kunio Kojima, Fumihito Sugai, Youhei Kakiuchi, K. Okada, M. Inaba
Exerting large forces is one of the difficult problems for a humanoid robot. In particular, tasks that require large forces through a tool, or tasks whose reference force is unknown, such as digging, are even more difficult. Digging was realized in previous research, but with that method the robot cannot exert a larger force when the current force is insufficient for digging, because the reference shovel force is determined only by changing the direction of the current shovel force, and the robot's CoG (center of gravity) is modified only for balancing. In this paper, we propose methods to determine a reference shovel force sufficient to realize the digging task, and to generate a feasible posture that exerts this reference force within the joint torque limits. To verify the methods, we conducted digging experiments with the life-size humanoid robot JAXON, which succeeded in digging soils ranging from soft to hard.
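The core feasibility check described here, exerting a reference shovel force within joint torque limits, reduces in the static case to evaluating tau = J^T F + g(q) against the limits. A minimal sketch of that check, with a toy Jacobian and hypothetical gravity torques in place of the full JAXON model:

```python
import numpy as np

def torques_for_tool_force(jacobian, wrench, gravity_torques):
    """Static joint torques needed to exert `wrench` at the tool tip.

    tau = J^T * F + g(q): the Jacobian transpose maps the task-space tool
    wrench to joint torques, to which gravity torques are added.
    """
    return jacobian.T @ wrench + gravity_torques

def is_posture_feasible(jacobian, wrench, gravity_torques, torque_limits):
    """A posture is feasible if every joint torque stays within its limit."""
    tau = torques_for_tool_force(jacobian, wrench, gravity_torques)
    return np.all(np.abs(tau) <= torque_limits)

# Toy example: a 3-joint arm pressing a shovel downward with 80 N.
J = np.array([[0.9, 0.5, 0.2],    # rows: task-space directions (here 2D: x, z)
              [0.1, 0.6, 0.4]])
F = np.array([0.0, -80.0])        # desired shovel force [N]
g = np.array([5.0, 12.0, 3.0])    # gravity torques [Nm]
limits = np.array([60.0, 60.0, 30.0])
print(is_posture_feasible(J, F, g, limits))
```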
{"title":"Whole-body Posture Generation by Adjusting Tool Force with CoG Movement: Application to Soil Digging","authors":"Takayuki Murooka, Riku Shigematsu, Kunio Kojima, Fumihito Sugai, Youhei Kakiuchi, K. Okada, M. Inaba","doi":"10.1109/Humanoids43949.2019.9035006","DOIUrl":"https://doi.org/10.1109/Humanoids43949.2019.9035006","url":null,"abstract":"Exerting large force is one of the difficult problems for a humanoid robot. In particular, the task which needs large force with a tool or the task whose reference force is unknown such as digging are more difficult. The task of digging was realized in the previous research, but with that method the robot cannot exert large force even though force is not enough for digging because the decision method of reference shovel force is only changing the direction of the current shovel force, and modification of the robot's CoG (center of gravity) is only used for balancing. In this paper, we proposed methods to determine the reference shovel force which is necessary enough to realize the task of digging, and generate feasible posture which exerts the reference shovel force within joint torque limits. To verify the methods, we conducted experiments of the task of digging using a life-size humanoid robot JAXON. JAXON succeeded digging with some soil from soft to hard.","PeriodicalId":404758,"journal":{"name":"2019 IEEE-RAS 19th International Conference on Humanoid Robots (Humanoids)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117112624","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Refining 6D Object Pose Predictions using Abstract Render-and-Compare
Pub Date: 2019-10-01 | DOI: 10.1109/Humanoids43949.2019.9035024
Arul Selvam Periyasamy, Max Schwarz, Sven Behnke
Robotic systems often require precise scene analysis capabilities, especially in unstructured, cluttered situations such as those occurring in human-made environments. While current deep-learning-based methods yield good estimates of object poses, they often struggle with large amounts of occlusion and do not take inter-object effects into account. Vision as inverse graphics is a promising concept for detailed scene analysis. A key element of this idea is a method for inferring scene parameter updates from the rasterized 2D scene. However, the rasterization process is notoriously difficult to invert, due both to the projection and occlusion processes and to secondary effects such as lighting or reflections. We propose to remove the latter from the process by mapping the rasterized image into an abstract feature space learned in a self-supervised way from pixel correspondences. Using only a lightweight inverse rendering module, this allows us to refine 6D object pose estimates in highly cluttered scenes by optimizing a simple pixel-wise difference in the abstract image representation. We evaluate our approach on the challenging YCB-Video dataset, where it yields large improvements and demonstrates a large basin of attraction towards the correct object poses.
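Read as an optimization, the refinement minimizes a pixel-wise difference in the learned feature space with respect to the pose. A rough sketch of that loop, using numerical gradients and a placeholder render_features function standing in for the rendering and feature-mapping pipeline (the paper's module is far more efficient):

```python
import numpy as np

def refine_pose(pose, observed_feat, render_features, lr=0.01, iters=100, eps=1e-4):
    """Refine a 6D pose by gradient descent on a feature-space image difference.

    `render_features(pose)` stands in for rendering the scene at `pose` and
    mapping the image into the learned abstract feature space; the loss is a
    simple pixel-wise squared difference in that space.  Gradients are taken
    by finite differences here purely for illustration.
    """
    pose = pose.copy()
    for _ in range(iters):
        grad = np.zeros_like(pose)
        base = np.sum((render_features(pose) - observed_feat) ** 2)
        for i in range(len(pose)):
            p = pose.copy()
            p[i] += eps
            grad[i] = (np.sum((render_features(p) - observed_feat) ** 2) - base) / eps
        pose -= lr * grad
    return pose
```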
{"title":"Refining 6D Object Pose Predictions using Abstract Render-and-Compare","authors":"Arul Selvam Periyasamy, Max Schwarz, Sven Behnke","doi":"10.1109/Humanoids43949.2019.9035024","DOIUrl":"https://doi.org/10.1109/Humanoids43949.2019.9035024","url":null,"abstract":"Robotic systems often require precise scene analysis capabilities, especially in unstructured, cluttered situations, as occurring in human-made environments. While current deep-learning based methods yield good estimates of object poses, they often struggle with large amounts of occlusion and do not take inter-object effects into account. Vision as inverse graphics is a promising concept for detailed scene analysis. A key element for this idea is a method for inferring scene parameter updates from the rasterized 2D scene. However, the rasterization process is notoriously difficult to invert, both due to the projection and occlusion process, but also due to secondary effects such as lighting or reflections. We propose to remove the latter from the process by mapping the rasterized image into an abstract feature space learned in a self-supervised way from pixel correspondences. Using only a light-weight inverse rendering module, this allows us to refine 6D object pose estimations in highly cluttered scenes by optimizing a simple pixel-wise difference in the abstract image representation. We evaluate our approach on the challenging YCB-Video dataset, where it yields large improvements and demonstrates a large basin of attraction towards the correct object poses.","PeriodicalId":404758,"journal":{"name":"2019 IEEE-RAS 19th International Conference on Humanoid Robots (Humanoids)","volume":"38 2","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120925535","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On Force Synergies in Human Grasping Behavior
Pub Date: 2019-10-01 | DOI: 10.1109/Humanoids43949.2019.9035047
J. Starke, Konstantinos Chatzilygeroudis, A. Billard, T. Asfour
The human hand is a versatile and complex system with dexterous manipulation capabilities. For the transfer of human grasping capabilities to humanoid robotic and prosthetic hands, an understanding of the dynamic characteristics of grasp motions is fundamental. Although the analysis of grasp synergies, especially for kinematic hand postures, is a very active field of research, the description and transfer of grasp forces is still a challenging task. In this work, we introduce a novel representation of grasp synergies in the force space, so-called force synergies, which describe forces applied at contact locations in a low-dimensional space and are inspired by the correlations between grasp forces in the fingers and palm. To evaluate this novel representation, we conduct a human grasping study with eight subjects performing handover and tool-use tasks on 14 objects with varying content and weight, using 16 different grasp types. We capture contact forces at 18 locations within the hand together with the joint angle values of a data glove with 22 degrees of freedom. We identify correlations between contact forces and derive force synergies using dimensionality reduction techniques, which allow us to represent the grasp forces applied during grasping with only eight parameters.
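The dimensionality reduction behind such force synergies is commonly PCA; a minimal sketch on hypothetical data, projecting the 18 contact-force channels onto eight synergy components:

```python
import numpy as np

# Hypothetical data: N grasps x 18 contact-force magnitudes measured in the hand.
forces = np.random.rand(500, 18)

# PCA via SVD on mean-centered data: the top components are the force synergies.
centered = forces - forces.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
synergies = Vt[:8]                      # 8 synergy vectors, each of length 18
coeffs = centered @ synergies.T         # 8 parameters per grasp
reconstruction = coeffs @ synergies + forces.mean(axis=0)

explained = (S[:8] ** 2).sum() / (S ** 2).sum()
print(f"variance explained by 8 synergies: {explained:.1%}")
```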
{"title":"On Force Synergies in Human Grasping Behavior","authors":"J. Starke, Konstantinos Chatzilygeroudis, A. Billard, T. Asfour","doi":"10.1109/Humanoids43949.2019.9035047","DOIUrl":"https://doi.org/10.1109/Humanoids43949.2019.9035047","url":null,"abstract":"The human hand is a versatile and complex system with dexterous manipulation capabilities. For the transfer of human grasping capabilities to humanoid robotic and prosthetic hands, an understanding of the dynamic characteristics of grasp motions is fundamental. Although the analysis of grasp synergies, especially for kinematic hand postures, is a very active field of research, the description and transfer of grasp forces is still a challenging task. In this work, we introduce a novel representation of grasp synergies in the force space, socalled force synergies, which describe forces applied at contact locations in a low dimensional space and are inspired by the correlations between grasp forces in fingers and palm. To evaluate this novel representation, we conduct a human grasping study with eight subjects performing handover and tool use tasks on 14 objects with varying content and weight using 16 different grasp types. We capture contact forces at 18 locations within the hand together with the joint angle values of a data glove with 22 degrees of freedom. We identify correlations between contact forces and derive force synergies using dimensionality reduction techniques, which allow to represent grasp forces applied during grasping with only eight parameters.","PeriodicalId":404758,"journal":{"name":"2019 IEEE-RAS 19th International Conference on Humanoid Robots (Humanoids)","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123751315","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Data-Driven Model Predictive Control for the Contact-Rich Task of Food Cutting
Pub Date: 2019-10-01 | DOI: 10.1109/Humanoids43949.2019.9035011
Ioanna Mitsioni, Y. Karayiannidis, J. A. Stork, D. Kragic
Modelling of contact-rich tasks is challenging and cannot be entirely solved using classical control approaches, due to the difficulty of constructing an analytic description of the contact dynamics. Additionally, in a manipulation task like food cutting, purely learning-based methods such as Reinforcement Learning require either a vast amount of data that is expensive to collect on a real robot, or a highly realistic simulation environment, which is currently not available. This paper presents a data-driven control approach that employs a recurrent neural network to model the dynamics for a Model Predictive Controller. We build upon earlier work limited to torque-controlled robots and redefine it for velocity-controlled ones. We incorporate force/torque sensor measurements, and reformulate and further extend the control problem formulation. We evaluate the performance on objects used for training, as well as on unknown objects, by means of the cutting rates achieved, and demonstrate that the method can efficiently treat different cases with only one dynamic model. Finally, we investigate the behavior of the system during force-critical instances of cutting and illustrate its adaptive behavior in difficult cases.
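One simple way to use a learned dynamics model inside MPC is random-shooting: sample candidate action sequences, roll them through the model, and execute the first action of the cheapest one. A sketch under that assumption, with dynamics_step and cost_fn as placeholders for the trained recurrent model and the cutting objective; the paper's exact optimizer may differ:

```python
import numpy as np

def mpc_action(state, dynamics_step, cost_fn, horizon=10, n_samples=256):
    """Sampling-based MPC: roll candidate action sequences through a learned
    dynamics model and execute the first action of the cheapest sequence.

    `dynamics_step(state, action)` stands in for one step of the trained
    recurrent model (kept stateless here for brevity); `cost_fn` scores a
    state, e.g. penalizing deviation from a desired cutting velocity/force.
    """
    actions = np.random.uniform(-1.0, 1.0, size=(n_samples, horizon))
    costs = np.zeros(n_samples)
    for k in range(n_samples):
        s = state
        for t in range(horizon):
            s = dynamics_step(s, actions[k, t])
            costs[k] += cost_fn(s)
    return actions[np.argmin(costs), 0]
```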
{"title":"Data-Driven Model Predictive Control for the Contact-Rich Task of Food Cutting","authors":"Ioanna Mitsioni, Y. Karayiannidis, J. A. Stork, D. Kragic","doi":"10.1109/Humanoids43949.2019.9035011","DOIUrl":"https://doi.org/10.1109/Humanoids43949.2019.9035011","url":null,"abstract":"Modelling of contact-rich tasks is challenging and cannot be entirely solved using classical control approaches due to the difficulty of constructing an analytic description of the contact dynamics. Additionally, in a manipulation task like food-cutting, purely learning-based methods such as Reinforcement Learning, require either a vast amount of data that is expensive to collect on a real robot, or a highly realistic simulation environment, which is currently not available. This paper presents a data-driven control approach that employs a recurrent neural network to model the dynamics for a Model Predictive Controller. We build upon earlier work limited to torque-controlled robots and redefine it for velocity controlled ones. We incorporate force/torque sensor measurements, reformulate and further extend the control problem formulation. We evaluate the performance on objects used for training, as well as on unknown objects, by means of the cutting rates achieved and demonstrate that the method can efficiently treat different cases with only one dynamic model. Finally we investigate the behavior of the system during force-critical instances of cutting and illustrate its adaptive behavior in difficult cases.","PeriodicalId":404758,"journal":{"name":"2019 IEEE-RAS 19th International Conference on Humanoid Robots (Humanoids)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126941485","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Virtual Reality Teleoperation of a Humanoid Robot Using Markerless Human Upper Body Pose Imitation
Pub Date: 2019-10-01 | DOI: 10.1109/Humanoids43949.2019.9035064
Matthias Hirschmanner, Christiana Tsiourti, T. Patten, M. Vincze
Teleoperation of robots with traditional input devices (joysticks, keyboards, etc.) is often difficult and cumbersome, especially for novice users. We introduce an intuitive virtual reality (VR) based teleoperation system for humanoid robots that imitates the user's upper body pose. We present an algorithm to directly calculate the robot's joint angles from the teleoperator's arm poses using the Leap Motion Controller, and a comfortable VR environment for visual feedback. The intuitiveness of the system is tested with 21 novice users performing two object manipulation tasks and compared with kinesthetic guidance, a popular alternative to teleoperation for Learning from Demonstration (LfD). The majority of the users preferred our teleoperation system overall for both tasks, stating it was easier to learn. Users also showed an objective performance improvement for one task in particular, exhibiting lower task duration. A video of the working system can be found at http://hirschmanner.com/teleoperation.
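A small ingredient of mapping tracked arm poses to robot joint angles is recovering a flexion angle from tracked keypoints; a minimal, hypothetical sketch (the full system resolves the whole arm chain from Leap Motion data):

```python
import numpy as np

def elbow_angle(shoulder, elbow, wrist):
    """Elbow flexion angle from three tracked 3D points (law of cosines).

    A minimal piece of the tracked-pose-to-joint-angle mapping; the actual
    algorithm also resolves shoulder and wrist rotations.
    """
    upper = shoulder - elbow
    fore = wrist - elbow
    cos = np.dot(upper, fore) / (np.linalg.norm(upper) * np.linalg.norm(fore))
    return np.arccos(np.clip(cos, -1.0, 1.0))

print(np.degrees(elbow_angle(np.array([0.0, 0.0, 0.0]),
                             np.array([0.3, 0.0, 0.0]),
                             np.array([0.3, 0.3, 0.0]))))  # 90 degrees
```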
{"title":"Virtual Reality Teleoperation of a Humanoid Robot Using Markerless Human Upper Body Pose Imitation","authors":"Matthias Hirschmanner, Christiana Tsiourti, T. Patten, M. Vincze","doi":"10.1109/Humanoids43949.2019.9035064","DOIUrl":"https://doi.org/10.1109/Humanoids43949.2019.9035064","url":null,"abstract":"Teleoperation of robots with traditional input devices (joysticks, keyboard, etc.) is often difficult and cumbersome especially for novice users. We introduce an intuitive virtual reality (VR) based teleoperation system for humanoid robots that imitates the user's upper body pose. We present an algorithm to directly calculate the robot's joint angles from the teleoperator's arm poses using the Leap Motion Controller and a comfortable VR environment for visual feedback. The intuitiveness of the system is tested with 21 novice users performing two object manipulation tasks and compared with kinesthetic guidance which is a popular alternative to teleoperation for Learning from Demonstration (LfD). The majority of the users preferred our teleoperation system overall for both tasks, stating it was easier to learn. Users also showed objective performance improvement for one task in particular, exhibiting lower task duration. A video of the working system can be found at http://hirschmanner.com/teleoperation.","PeriodicalId":404758,"journal":{"name":"2019 IEEE-RAS 19th International Conference on Humanoid Robots (Humanoids)","volume":"96 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115543235","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Automated Design of Simple and Robust Manipulators for Dexterous In-Hand Manipulation Tasks using Evolutionary Strategies
Pub Date: 2019-10-01 | DOI: 10.1109/Humanoids43949.2019.9035055
Andre Meixner, Christopher Hazard, N. Pollard
In spite of substantial progress, robust and dexterous in-hand manipulation remains a robotics grand challenge. Recent research has shown that optimizing robot hand morphology for specific tasks can result in custom hand designs that are low-cost, easy to maintain, and highly capable. However, the resulting manipulation strategies may not be very robust or generalizable in real-world situations. This paper shows that robustness can be improved dramatically by optimizing controls instead of contact forces/trajectories and by considering uncertainty explicitly during the optimization process. We present an evolutionary-algorithm-based pipeline for co-optimizing hand morphology and control strategy over families of problems and initial states in order to achieve robust in-hand manipulation. We demonstrate that this approach produces robust results which utilize all surfaces of the hand and surprising dynamic motions. We showcase the advantage of optimizing joint limit values to create robust designs. Furthermore, we demonstrate that our approach is complementary to trajectory-optimization-based approaches and can be utilized to improve the robustness of such results as well as to create custom hand designs from scratch. Results are shown for repositioning and reorienting diverse objects relative to the palm of the hand.
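The robustness mechanism, scoring each candidate over a family of initial states rather than one nominal case, can be sketched with a basic evolutionary strategy; fitness here is a placeholder for a simulated manipulation rollout:

```python
import numpy as np

def evolve(fitness, dim, initial_states, pop=50, gens=100, sigma=0.1):
    """Minimal evolutionary strategy for robust control/morphology parameters.

    Each candidate parameter vector is scored as its *average* fitness over a
    family of initial states, so survivors must cope with the whole family
    rather than a single nominal situation.  `fitness(params, state)` stands
    in for an in-hand manipulation rollout in simulation.
    """
    best = np.zeros(dim)
    for _ in range(gens):
        candidates = best + sigma * np.random.randn(pop, dim)
        scores = [np.mean([fitness(c, s) for s in initial_states])
                  for c in candidates]
        best = candidates[int(np.argmax(scores))]
    return best
```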
{"title":"Automated Design of Simple and Robust Manipulators for Dexterous In-Hand Manipulation Tasks using Evolutionary Strategies","authors":"Andre Meixner, Christopher Hazard, N. Pollard","doi":"10.1109/Humanoids43949.2019.9035055","DOIUrl":"https://doi.org/10.1109/Humanoids43949.2019.9035055","url":null,"abstract":"In spite of substantial progress, robust and dexterous in-hand manipulation remains a robotics grand challenge. Recent research has shown that optimization of robot hand morphology for specific tasks can result in custom hand designs that are low-cost, easy to maintain, and highly capable. However, the resulting manipulation strategies may not be very robust or generalizable in real-world situations. This paper shows that robustness can be improved dramatically by optimizing controls instead of contact force / trajectories and by considering uncertainty explicitly during the optimization process. We present a evolutionary algorithm based pipeline for co-optimizing hand morphology and control strategy over families of problems and initial states in order to achieve robust in-hand manipulation. We demonstrate that this approach produces robust results which utilize all surfaces of the hand and surprising dynamic motions. We showcase the advantage of optimizing joint limit values to create robust designs. Furthermore, we demonstrate that our approach is complementary to trajectory optimization based approaches and can be utilized to improve robustness of such results as well as to create custom hand designs from scratch. Results are shown for repositioning and reorienting diverse objects relative to the palm of the hand.","PeriodicalId":404758,"journal":{"name":"2019 IEEE-RAS 19th International Conference on Humanoid Robots (Humanoids)","volume":"65 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129106173","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Adaptive, Humanlike Robot Hand with Selective Interdigitation: Towards Robust Grasping and Dexterous, In-Hand Manipulation
Pub Date: 2019-10-01 | DOI: 10.1109/Humanoids43949.2019.9035037
George P. Kontoudis, Minas Liarokapis, K. Vamvoudakis
This paper presents an adaptive robot hand that is capable of performing selective interdigitation, robust grasping, and dexterous, in-hand manipulation. The design consists of underactuated, compliant, anthropomorphic robot fingers that are implemented with flexure joints based on elastomer materials (urethane rubber). The metacarpophalangeal (MCP) joint of each finger can achieve both flexion/extension and abduction/adduction. The use of differential mechanisms simplifies the actuation scheme, as we utilize only two actuators for four fingers, achieving affordable dexterity. The two actuators offer increased power transmission during the execution of grasping and manipulation tasks. The importance of the thumb is highlighted with the use of two individual tendon-routing systems for its control. An analytical model is employed to derive the rotational stiffness of the finger flexure joints and select appropriate actuators. Selective interdigitation allows the robot hand to switch from pinch grasp configurations to power grasp configurations optimizing the performance of the device for specific objects. The design can be fabricated with off-the-shelf materials and rapid prototyping techniques, while its efficiency has been validated using an extensive set of experimental paradigms that involved the execution of complex tasks with everyday life objects.
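As an illustration of the kind of analytical stiffness model mentioned, a simple beam-bending approximation for a rectangular elastomer flexure; the paper's actual model may differ, and the material constant and dimensions below are assumptions:

```python
def flexure_rotational_stiffness(E, width, thickness, length):
    """Rotational stiffness of a rectangular elastomer flexure joint [Nm/rad].

    Simple beam-bending approximation k = E*I/L with the second moment of
    area I = w*t^3/12; useful for sizing actuators against joint stiffness.
    """
    I = width * thickness**3 / 12.0
    return E * I / length

# Urethane rubber, E ~ 5 MPa (illustrative), 10 mm wide, 4 mm thick, 6 mm long.
k = flexure_rotational_stiffness(5e6, 0.010, 0.004, 0.006)
print(f"{k:.4f} Nm/rad")
```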
{"title":"An Adaptive, Humanlike Robot Hand with Selective Interdigitation: Towards Robust Grasping and Dexterous, In-Hand Manipulation","authors":"George P. Kontoudis, Minas Liarokapis, K. Vamvoudakis","doi":"10.1109/Humanoids43949.2019.9035037","DOIUrl":"https://doi.org/10.1109/Humanoids43949.2019.9035037","url":null,"abstract":"This paper presents an adaptive robot hand that is capable of performing selective interdigitation, robust grasping, and dexterous, in-hand manipulation. The design consists of underactuated, compliant, anthropomorphic robot fingers that are implemented with flexure joints based on elastomer materials (urethane rubber). The metacarpophalangeal (MCP) joint of each finger can achieve both flexion/extension and abduction/adduction. The use of differential mechanisms simplifies the actuation scheme, as we utilize only two actuators for four fingers, achieving affordable dexterity. The two actuators offer increased power transmission during the execution of grasping and manipulation tasks. The importance of the thumb is highlighted with the use of two individual tendon-routing systems for its control. An analytical model is employed to derive the rotational stiffness of the finger flexure joints and select appropriate actuators. Selective interdigitation allows the robot hand to switch from pinch grasp configurations to power grasp configurations optimizing the performance of the device for specific objects. The design can be fabricated with off-the-shelf materials and rapid prototyping techniques, while its efficiency has been validated using an extensive set of experimental paradigms that involved the execution of complex tasks with everyday life objects.","PeriodicalId":404758,"journal":{"name":"2019 IEEE-RAS 19th International Conference on Humanoid Robots (Humanoids)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130620078","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multi-contact Stability of Humanoids using ZMP and CWC
Pub Date: 2019-10-01 | DOI: 10.1109/Humanoids43949.2019.9035029
Zhenting Wang, K. Harada, Weiwei Wan
We propose a method for checking the multi-contact stability of humanoids simultaneously using the Zero Moment Point (ZMP) and the Contact Wrench Cone (CWC). The main idea of our method is to derive the friction constraints of the foot contact using a soft-finger contact model. Thanks to the similar definitions of the ZMP and the soft-finger contact model, the friction ellipsoid computed from the soft-finger contact model at the ZMP can replace the 6-dimensional wrench computed from the friction constraints at each contact point of the foot. With the proposed method, the stability of the foot rotation can be judged using the ZMP, while other factors affecting the stability of the robot can be judged with the CWC. By combining the two wrench spaces, our method can check the stability of humanoids walking on a horizontal floor while making hand contact with the environment.
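The soft-finger friction ellipsoid invoked here couples tangential force with torque about the contact normal; a minimal stability check under that model, with illustrative friction coefficients:

```python
import numpy as np

def inside_friction_ellipsoid(f, tau_n, mu, gamma):
    """Soft-finger limit surface: tangential force and normal torque trade off.

    f = (fx, fy, fz) is the contact force at the ZMP, tau_n the torque about
    the contact normal, mu the friction coefficient, and gamma the torsional
    friction coefficient.  The contact holds if the point lies inside the
    ellipsoid (fx^2 + fy^2)/(mu*fz)^2 + tau_n^2/(gamma*fz)^2 <= 1.
    """
    fx, fy, fz = f
    if fz <= 0.0:          # unilateral contact: the foot can only push
        return False
    return (fx**2 + fy**2) / (mu * fz)**2 + tau_n**2 / (gamma * fz)**2 <= 1.0

print(inside_friction_ellipsoid((30.0, 10.0, 400.0), 5.0, mu=0.5, gamma=0.05))
```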
{"title":"Multi-contact Stability of Humanoids using ZMP and CWC","authors":"Zhenting Wang, K. Harada, Weiwei Wan","doi":"10.1109/Humanoids43949.2019.9035029","DOIUrl":"https://doi.org/10.1109/Humanoids43949.2019.9035029","url":null,"abstract":"We propose a method for checking the multicontact stability of humanoids simultaneously using Zero Moment Point (ZMP) and Contact Wrench Cone (CWC). The main idea of our method is to derive the friction constraints of foot contact using soft-finger contact model. Thanks to the similar definition of ZMP and the soft-finger contact model, it is able to use the friction ellipsoid computed from the soft-finger contact model at ZMP to replace the 6 dimensional wrench that computed from the friction constraints at each contact point of the foot contact. By using our proposed method, the stability of the foot rotation can be judged by using the ZMP while other factors affecting the stability of a robot can be judged by the CWC. By combining two wrench spaces, our method can be used to check the stability of humanoids which walking on the horizontal floor with hand contact with the environment.","PeriodicalId":404758,"journal":{"name":"2019 IEEE-RAS 19th International Conference on Humanoid Robots (Humanoids)","volume":"58 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124181113","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Position-Based Lateral Balance Control for Knee-Stretched Biped Robot
Pub Date: 2019-10-01 | DOI: 10.1109/Humanoids43949.2019.9035077
S. Kajita, M. Benallegue, Rafael Cisneros Limón, T. Sakaguchi, M. Morisawa, H. Kaminaga, Iori Kumagai, K. Kaneko, F. Kanehiro
This paper discusses a lateral balance controller for a biped robot with both legs fully extended. In a conventional position-controlled legged robot, balance control with stretched knees is an open problem, since the mechanical singularity prevents direct control of the floor force distribution. To control forces indirectly, we introduce an additional acceleration of the center of mass and a ZMP modification as control inputs. The lateral balance controller is designed as a state feedback system using a data-driven approach. The proposed lateral controller was merged with a sagittal controller based on Spatially Quantized Dynamics (SQD), enabling our humanoid robot HRP-2Kai to achieve a laterally well-balanced, knee-stretched, long-stride gait.
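The control inputs described act on the familiar lateral linear-inverted-pendulum dynamics; a sketch of state feedback on that model, with illustrative gains rather than the identified data-driven ones:

```python
import numpy as np

def lateral_com_step(x, u_zmp, dt, omega):
    """One Euler step of the lateral linear inverted pendulum dynamics.

    x = (CoM position, CoM velocity); the modified ZMP position u_zmp is the
    control input: com_acc = omega^2 * (com - zmp), omega = sqrt(g / z_com).
    """
    pos, vel = x
    acc = omega**2 * (pos - u_zmp)
    return np.array([pos + vel * dt, vel + acc * dt])

# State feedback u = -K x; the gains here are illustrative, not identified.
K = np.array([-1.8, -0.6])
x = np.array([0.03, 0.0])            # 3 cm lateral CoM offset
omega = np.sqrt(9.81 / 0.8)          # 0.8 m CoM height
for _ in range(200):
    x = lateral_com_step(x, -K @ x, 0.005, omega)
print(x)  # decays toward the origin with stabilizing gains
```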
{"title":"Position-Based Lateral Balance Control for Knee-Stretched Biped Robot","authors":"S. Kajita, M. Benallegue, Rafael Cisneros Limón, T. Sakaguchi, M. Morisawa, H. Kaminaga, Iori Kumagai, K. Kaneko, F. Kanehiro","doi":"10.1109/Humanoids43949.2019.9035077","DOIUrl":"https://doi.org/10.1109/Humanoids43949.2019.9035077","url":null,"abstract":"This paper discusses a lateral balance controller for a biped robot with both legs fully extended. In a conventional position-controlled legged robot, a balance control with stretched knees is an open problem since the mechanical singularity prevents the direct control of the floor force distribution. To control forces indirectly, we introduce an additional acceleration of the center of mass and a ZMP modification as control inputs. The lateral balance controller is designed as a state feedback system by using a data driven approach. The proposed lateral controller was merged with a sagittal controller based on the Spatially Quantized Dynamics (SQD), then it helped our humanoid robot HRP-2Kai to achieve laterally well balanced, knee-stretched, and long stride gait.","PeriodicalId":404758,"journal":{"name":"2019 IEEE-RAS 19th International Conference on Humanoid Robots (Humanoids)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114223210","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Designing Grasping Tools for Robotic Assembly Based on Shape Analysis of Parts
Pub Date: 2019-10-01 | DOI: 10.1109/Humanoids43949.2019.9035040
Kento Nakayama, Weiwei Wan, K. Harada
This paper provides a method for automatically designing a set of grasping tools for performing a sequence of robotic assembly tasks. First, convex shape decomposition is applied to extract the grasped part of an object. From the shape information of this part, we determine the number of fingers of the grasping tool as well as the stroke and dimensions of each finger. Next, the detailed shape of the finger surface, such as the slant angle and the curvature radius, is determined by applying plane clustering to the surface of the grasped part. We reduce the number of grasping tools used across the whole assembly sequence by checking whether the same grasping tool can be shared between two individual assembly tasks. Finally, the proposed method was verified through a series of robotic assembly experiments.
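The plane clustering step can be approximated by k-means over face normals of the grasped part's mesh; a minimal sketch with hypothetical unit-normal inputs:

```python
import numpy as np

def cluster_planes(normals, k=3, iters=20):
    """Group mesh face normals into k dominant plane orientations (k-means).

    A minimal stand-in for the plane clustering used to derive finger-surface
    slant angles from the grasped part; `normals` is an array of unit normals.
    """
    centers = normals[np.random.choice(len(normals), k, replace=False)]
    for _ in range(iters):
        labels = np.argmax(normals @ centers.T, axis=1)   # most-aligned center
        for j in range(k):
            members = normals[labels == j]
            if len(members):
                c = members.mean(axis=0)
                centers[j] = c / np.linalg.norm(c)
    return centers, labels
```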
{"title":"Designing Grasping Tools for Robotic Assembly Based on Shape Analysis of Parts","authors":"Kento Nakayama, Weiwei Wan, K. Harada","doi":"10.1109/Humanoids43949.2019.9035040","DOIUrl":"https://doi.org/10.1109/Humanoids43949.2019.9035040","url":null,"abstract":"This paper aims to provide a method for automatically designing a set of grasping tools performing a sequence of robotic assembly tasks. First, the convex shape decomposition is applied to extract the grasped part of an object. From the shape information of the part, we determine the number of fingers of the grasping tool as well as the stroke and dimension of each finger. Next, the detailed shape of finger surface such as the slant angle and the curvature radius is determined by applying the plane clustering to the surface of the grasped part. We consider reducing the number of grasping tools used in a whole sequence of assembly by checking if a same grasping tool can be commonly used between two individual assembly tasks. Finally, the proposed method was verified through a series of robotic assembly experiments.","PeriodicalId":404758,"journal":{"name":"2019 IEEE-RAS 19th International Conference on Humanoid Robots (Humanoids)","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126444110","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}