Pub Date: 2021-07-19 | DOI: 10.1109/HUMANOIDS47582.2021.9555794
Beomyeong Park, Myeong-Ju Kim, E. Sung, Junhyung Kim, Jaeheung Park
A long stride can enable a humanoid robot to achieve fast and stable walking. To realize a long stride, the kinematics of the robot should be fully utilized, and walking with pelvic rotation can be a solution. Pelvis-rotation walking requires a rotational trajectory of the pelvis that respects the robot's kinematic limitations. When the robot walks with a long stride while rotating the pelvis, the yaw momentum may be larger than when it walks with the pelvis fixed. This momentum is caused by the rotation of the pelvis together with the leg motion, and hence walking with pelvic rotation may become unstable. In this paper, we propose to control the lower body of the robot as a redundant system comprising the leg joints and a waist joint for walking with pelvic rotation. The position of the base frame used to implement this redundant system for the lower body is also proposed. In addition, a quadratic programming (QP) controller is formulated to enable arm swing for yaw momentum compensation while controlling the lower body. The feasibility of the proposed control method was verified in simulation and in an experiment of walking with a long stride while rotating the pelvis using the QP controller and compensating the yaw momentum by means of arm swing.
Title: Whole-body walking pattern using pelvis-rotation for long stride and arm swing for yaw angular momentum compensation. Published in: 2020 IEEE-RAS 20th International Conference on Humanoid Robots (Humanoids).
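The QP controller itself is not given in the abstract. Purely as an illustration, the sketch below shows the general structure of resolving a redundant lower body (leg joints plus a waist yaw joint) against stacked task velocities with a box-constrained least-squares solve, which is a simple QP. The Jacobian, task values, joint count, and velocity limits are hypothetical placeholders, not the authors' model; in the paper the same kind of solve would also carry the arm-swing task used for yaw momentum compensation.

```python
# Minimal sketch (not the authors' controller): resolve a redundant
# leg + waist-yaw joint chain against stacked task velocities with a
# bounded least-squares solve, i.e. a simple box-constrained QP:
#   min_qdot || J qdot - xdot_des ||^2   s.t.  -qdot_max <= qdot <= qdot_max
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(0)
n_joints = 13                       # hypothetical: 2 x 6 leg joints + 1 waist yaw

# Hypothetical stacked task Jacobian: swing-foot twist (6) + pelvis yaw rate (1)
J = rng.standard_normal((7, n_joints))
xdot_des = np.array([0.2, 0.0, 0.05, 0.0, 0.0, 0.0,   # swing-foot twist
                     0.3])                            # pelvis yaw rate (rad/s)

# A small regularization term keeps the redundant solution well conditioned
mu = 1e-3
A = np.vstack([J, np.sqrt(mu) * np.eye(n_joints)])
b = np.concatenate([xdot_des, np.zeros(n_joints)])

qdot_max = np.full(n_joints, 2.0)   # hypothetical joint-velocity limits (rad/s)
res = lsq_linear(A, b, bounds=(-qdot_max, qdot_max))
qdot = res.x
print("joint velocity command:", np.round(qdot, 3))
print("task residual:", np.round(J @ qdot - xdot_des, 3))
```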
Pub Date: 2021-07-19 | DOI: 10.1109/HUMANOIDS47582.2021.9555683
Fabio Fusaro, Edoardo Lamon, E. Momi, A. Ajoudani
This paper proposes a novel human-aware method that generates robot plans for autonomous and human-robot cooperative tasks in industrial environments. We modify the standard Behavior Tree (BT) formulation to take action-related costs into account, and design suitable metrics and cost functions that capture cooperation with a worker in terms of human availability, decisions, and ergonomics. The developed approach allows the robot to adapt its plan online to the human partner by choosing the tasks that minimize the execution cost. Through simulations, we first tuned the weights of the cost function for a realistic scenario. Subsequently, the developed method is validated through a proof-of-concept experiment representing the boxing of 4 different objects. The results show that the proposed cost-based BTs, along with the defined costs, enable the robot to react online and plan new tasks according to dynamic changes of the environment in terms of human presence and intentions. Our results indicate that the proposed solution has high potential to increase robot reactivity and flexibility while, at the same time, optimizing the decision-making process according to human actions.
Title: A Human-Aware Method to Plan Complex Cooperative and Autonomous Tasks using Behavior Trees. Published in: 2020 IEEE-RAS 20th International Conference on Humanoid Robots (Humanoids).
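The abstract does not give the cost-based BT formulation itself. Purely as an illustration, the sketch below shows one way a selector-style node could pick the cheapest currently feasible child on every tick, so the plan adapts online to the human partner. The node classes, cost terms, and state keys are hypothetical and not taken from the paper.

```python
# Minimal sketch (not the authors' formulation): a cost-aware selector
# node that ticks the currently cheapest feasible child. The cost terms
# (human availability, ergonomics, ...) are hypothetical placeholders.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class ActionNode:
    name: str
    cost_fn: Callable[[dict], float]   # maps world state -> estimated cost
    execute: Callable[[dict], bool]    # returns True on success


class CostSelector:
    def __init__(self, children: List[ActionNode]):
        self.children = children

    def tick(self, state: dict) -> bool:
        # Re-evaluate costs on every tick so the plan adapts online.
        ranked = sorted(self.children, key=lambda c: c.cost_fn(state))
        for child in ranked:
            if child.cost_fn(state) < float("inf"):   # inf = infeasible
                return child.execute(state)
        return False


# Hypothetical usage: prefer delegating to the human when they are available.
state = {"human_available": True}
robot_packs = ActionNode("robot_packs", lambda s: 5.0, lambda s: True)
human_packs = ActionNode("human_packs",
                         lambda s: 1.0 if s["human_available"] else float("inf"),
                         lambda s: True)
print(CostSelector([robot_packs, human_packs]).tick(state))
```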
Pub Date: 2021-07-19 | DOI: 10.1109/HUMANOIDS47582.2021.9555675
Y. Tazaki
This study proposes a real-time footstep and timing adaptation mechanism for humanoid robots that can be integrated into a conventional walking pattern generator and increase the robustness of walking against disturbances. To meet the strict real-time constraints of humanoid robot control, the proposed method computes viable capture basins in the design phase. This pre-computed data can be used at runtime to modify the foot placement, the timing of landing, and the center-of-mass movement in response to applied disturbances at a small computational cost. The performance of the proposed method is evaluated in simulation experiments.
Title: Footstep and Timing Adaptation for Humanoid Robots Utilizing Pre-computation of Capture Regions. Published in: 2020 IEEE-RAS 20th International Conference on Humanoid Robots (Humanoids).
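As a much-simplified illustration of the offline/online split described above (not the paper's method, which also adapts step timing and center-of-mass motion via viable capture basins), the sketch below tabulates 1-D linear-inverted-pendulum capture points on a state grid in a design phase and performs a cheap lookup at runtime. All grid ranges, the step limit, and the pendulum parameters are assumptions.

```python
# Minimal sketch (not the paper's algorithm): pre-compute, over a grid of
# 1-D LIPM states, the step location that would capture the robot, then
# look the result up at runtime. Leg geometry and limits are hypothetical.
import numpy as np

g, z_com = 9.81, 0.8
omega = np.sqrt(g / z_com)
max_step = 0.4                           # hypothetical reachable step length (m)

# ---- offline (design phase): tabulate capture steps on a state grid ----
xs = np.linspace(-0.2, 0.2, 41)          # CoM offset w.r.t. stance foot (m)
vs = np.linspace(-1.0, 1.0, 81)          # CoM velocity (m/s)
capture_table = np.empty((len(xs), len(vs)))
for i, x in enumerate(xs):
    for j, v in enumerate(vs):
        xi = x + v / omega               # instantaneous capture point
        capture_table[i, j] = np.clip(xi, -max_step, max_step)

# ---- runtime: cheap nearest-neighbour lookup after a disturbance ----
def capture_step(x: float, v: float) -> float:
    i = np.abs(xs - x).argmin()
    j = np.abs(vs - v).argmin()
    return capture_table[i, j]

print(capture_step(0.05, 0.6))           # adjusted foot placement (m)
```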
Pub Date: 2021-07-19 | DOI: 10.1109/HUMANOIDS47582.2021.9555789
Vijeth Rai, David Boe, E. Rombokas
Transitioning from one activity to another is one of the key challenges of prosthetic control. Unlike body sensors (EMG, mechanical), vision sensors provide a glance at the environment and at the movements it will demand next. This could be employed to anticipate and trigger transitions in a prosthesis to provide a smooth user experience. A significant bottleneck in using vision sensors has been the acquisition of large amounts of labeled training data. Labeling the terrain in thousands of images is labor-intensive; it would be ideal to simply collect visual data for long periods without needing to label each frame. Toward that goal, we apply an unsupervised learning method to generate mode labels for kinematic gait cycles in the training data. We use these labels together with images from the same training data to train a vision classifier. The classifier predicts the target mode an average of 2.2 seconds before the kinematic changes occur. We report 96.6% overall and 99.5% steady-state mode classification accuracy. These results are comparable to studies using manually labeled data. This method, however, has the potential to scale dramatically without requiring additional labeling.
Title: Vision for Prosthesis Control Using Unsupervised Labeling of Training Data. Published in: 2020 IEEE-RAS 20th International Conference on Humanoid Robots (Humanoids).
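As an illustration of the two-stage idea above (unsupervised mode labels from kinematics, then a supervised vision classifier trained on those labels), here is a minimal sketch on synthetic data. The clustering and classifier choices (k-means, SVM) and the feature definitions are assumptions for the sketch, not necessarily the authors' pipeline.

```python
# Minimal sketch (not the authors' pipeline): cluster kinematic gait-cycle
# features without labels, then reuse the cluster ids as mode labels for a
# vision classifier. All features and data here are synthetic placeholders.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical per-gait-cycle kinematic features (e.g. joint-angle statistics)
level  = rng.normal([0.0, 0.0], 0.1, size=(100, 2))
stairs = rng.normal([1.0, 1.0], 0.1, size=(100, 2))
kin_features = np.vstack([level, stairs])

# 1) Unsupervised step: mode labels from kinematics only
modes = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(kin_features)

# 2) Supervised step: train a vision classifier on image features from the
#    same cycles, using the cluster ids as targets (placeholder image features)
img_features = kin_features + rng.normal(0.0, 0.2, size=kin_features.shape)
vision_clf = SVC().fit(img_features, modes)
print("training accuracy:", vision_clf.score(img_features, modes))
```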
Pub Date: 2021-07-19 | DOI: 10.1109/HUMANOIDS47582.2021.9555677
Felix Franzel, Thomas Eiband, Dongheui Lee
This work introduces a contact event pipeline that distinguishes task contacts from human-robot interaction and collisions during task execution. The increasing need for close-proximity physical human-robot interaction (pHRI) in the private, health, and industrial sectors demands new safety solutions. One of the most important issues regarding safe collaboration is the robust recognition and classification of contacts between human and robot. A solution is designed that enables simple task teaching and accurate contact monitoring during task execution. Besides an external force and torque sensor, only proprioceptive data is used for the contact evaluation. An approach based on demonstrated task knowledge and the offset resulting from human interaction is designed to distinguish contact events from normal execution by means of a contact event detector. A contact type classifier implemented as a Support Vector Machine is trained with the identified events. The system is set up to quickly identify contact incidents and enable appropriate robot reactions. An offline evaluation is conducted with data recorded from intended and unintended contacts as well as examples of task contacts such as object manipulation and environmental interactions. The system's performance and its high responsiveness are evaluated in different experiments, including a real-world task.
Title: Detection of Collaboration and Collision Events during Contact Task Execution. Published in: 2020 IEEE-RAS 20th International Conference on Humanoid Robots (Humanoids).
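As a schematic illustration of a residual-based contact event detector followed by an SVM contact-type classifier, consider the sketch below. The threshold, the two-dimensional feature set, and the three contact classes are placeholders, not the paper's actual design, which builds its residual from demonstrated task knowledge.

```python
# Minimal sketch (not the paper's pipeline): flag a contact event when the
# measured wrench deviates from the wrench recorded during demonstration by
# more than a threshold, then classify the event with an SVM. The threshold,
# features, and training data are hypothetical placeholders.
import numpy as np
from sklearn.svm import SVC

THRESH = 5.0                                # hypothetical residual threshold (N)

def detect_event(measured_wrench, expected_wrench):
    residual = np.linalg.norm(measured_wrench - expected_wrench)
    return residual > THRESH, residual

# Hypothetical training set: residual-based features for three contact types
rng = np.random.default_rng(0)
X = np.vstack([
    rng.normal([2.0, 0.1], 0.5, size=(50, 2)),    # task contact
    rng.normal([8.0, 0.3], 0.5, size=(50, 2)),    # intended interaction
    rng.normal([20.0, 2.0], 0.5, size=(50, 2)),   # collision
])
y = np.repeat(["task", "interaction", "collision"], 50)
clf = SVC().fit(X, y)

event, r = detect_event(np.array([12.0, 0, 0, 0, 0, 0]),
                        np.array([3.0, 0, 0, 0, 0, 0]))
if event:
    print("event detected, type:", clf.predict([[r, 0.4]])[0])
```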
Pub Date: 2021-07-19 | DOI: 10.1109/HUMANOIDS47582.2021.9555673
M. Eslamy, A. Schilling
In this work we analyzed the relationship between the shank and thigh angles (separately) and the gait cycle progression to develop a novel approach for gait percent estimation. To do so, the angles were integrated. Our findings show that the integrals of the shank and thigh angles behave monotonically and can therefore approximate the gait percent during a gait cycle through a one-to-one relationship. For all individuals, speeds, and gaits, a quasi-linear relationship was found between the shank and thigh angle integrals and the gait percent. Average $\mathrm{R}^{2}$ values close to one and average RMS errors of less than 2.2 were achieved. The proposed approach was investigated for different subjects (21 subjects), speeds (10 speeds), and gaits (walking and running) and can potentially be used for human motion analysis as well as for motion planning of assistive devices.
Title: Gait Percent Estimation during Walking and Running using Sagittal Shank or Thigh Angles. Published in: 2020 IEEE-RAS 20th International Conference on Humanoid Robots (Humanoids).
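The core idea above, that the integral of a sagittal segment angle is monotonic over a gait cycle and can therefore be inverted to a gait percent, can be illustrated as below. This is a toy sketch on synthetic data, not the authors' processing; the positive offset added before integration is only there to keep the synthetic integrand monotonic.

```python
# Minimal sketch (not the authors' method): map the integral of a sagittal
# segment angle to gait percent via a monotonic reference curve.
# The synthetic "shank angle" below is a placeholder for measured data.
import numpy as np

# One reference gait cycle: time-normalized samples of a segment angle
percent = np.linspace(0.0, 100.0, 101)
shank_angle = 20.0 * np.sin(2 * np.pi * percent / 100.0) + 5.0   # placeholder

# Integrate the angle over the cycle; an offset keeping the integrand
# positive makes this toy integral strictly monotonic.
dt = 0.01
integrand = shank_angle - shank_angle.min() + 1.0
integral = np.concatenate(
    [[0.0], np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * dt)])
integral_norm = integral / integral[-1]          # 0 ... 1 over the cycle

# At runtime, invert the monotonic curve by interpolation:
def gait_percent(current_integral_norm: float) -> float:
    return float(np.interp(current_integral_norm, integral_norm, percent))

print(gait_percent(0.5))     # estimated gait percent at mid-integral
```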
Pub Date: 2021-07-19 | DOI: 10.1109/HUMANOIDS47582.2021.9555686
A. Yousefi-Koma, B. Maleki, Hessam Maleki, A. Amani, M. Bazrafshani, Hossein Keshavarz, Ala Iranmanesh, A. Yazdanpanah, H. Alai, Sahel Salehi, Mahyar Ashkvari, Milad Mousavi, M. Shafiee-Ashtiani
This paper describes the hardware, the software framework, and experimental testing of the SURENA IV humanoid robotics platform. SURENA IV has 43 degrees of freedom (DoFs), including seven DoFs for each arm, six DoFs for each hand, and six DoFs for each leg, with a height of 170 cm, a mass of 68 kg, and morphological and mass properties similar to those of an average adult human. SURENA IV aims to realize a cost-effective and anthropomorphic humanoid robot for real-world scenarios. To this end, we demonstrate a locomotion framework based on a novel and inexpensive predictive foot sensor that enables walking despite a 7 cm foot position error caused by the accumulated deflection of links and connections (the robot was manufactured with tools available in universities). Thanks to this sensor, the robot can walk over unknown obstacles without any force feedback by adapting the foot height and orientation online. Moreover, the arm and hand of the robot have been designed to grasp objects of different stiffnesses and geometries, enabling the robot to perform drilling, visual servoing of a moving object, and writing its name on a whiteboard.
Title: SURENAIV: Towards A Cost-effective Full-size Humanoid Robot for Real-world Scenarios. Published in: 2020 IEEE-RAS 20th International Conference on Humanoid Robots (Humanoids).
Pub Date: 2021-07-19 | DOI: 10.1109/HUMANOIDS47582.2021.9555779
Wesley P. Chan, T. Tran, Sara Sheikholeslami, E. Croft
The Minimum Jerk motion model has long been cited in the literature for human point-to-point reaching motions in single-person tasks. While it has been demonstrated that applying minimum-jerk-like trajectories to robot reaching motions in the joint-action task of human-robot handovers allows a robot giver to be perceived as more careful, safe, and skilled, it has not been verified whether human reaching motions in handovers follow the Minimum Jerk model. To experimentally test and verify motion models for human reaches in handovers, we examined human reaching motions in unconstrained handovers (where the person is allowed to move their whole body) and fitted them against 1) the Minimum Jerk model, 2) its variation, the Decoupled Minimum Jerk model, and 3) the recently proposed Elliptical (Conic) model. Results showed that the Conic model fits unconstrained human handover reaching motions best. Furthermore, we discovered that, unlike constrained single-person reaching motions, which have been found to be elliptical, handover reaches are split between the elliptical and hyperbolic conic types. We expect our results will help guide the generation of more humanlike reaching motions for human-robot handover tasks.
Title: An Experimental Validation and Comparison of Reaching Motion Models for Unconstrained Handovers: Towards Generating Humanlike Motions for Human-Robot Handovers. Published in: 2020 IEEE-RAS 20th International Conference on Humanoid Robots (Humanoids).
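For reference, the Minimum Jerk point-to-point profile mentioned above has the standard closed form $x(t) = x_0 + (x_f - x_0)(10\tau^3 - 15\tau^4 + 6\tau^5)$ with $\tau = t/T$. The sketch below evaluates it and fits its parameters to a synthetic 1-D reach with scipy; the fitting setup is illustrative only and is not the paper's multi-model comparison.

```python
# Minimal sketch (not the paper's analysis): the classical minimum-jerk
# point-to-point profile and a least-squares fit of its parameters to a
# noisy reach. The synthetic "recorded reach" is a placeholder for real data.
import numpy as np
from scipy.optimize import curve_fit

def min_jerk(t, x0, xf, T):
    """Minimum-jerk position profile x(t) for a reach of duration T."""
    tau = np.clip(t / T, 0.0, 1.0)
    return x0 + (xf - x0) * (10 * tau**3 - 15 * tau**4 + 6 * tau**5)

# Placeholder "recorded" 1-D hand trajectory: a 1.2 s reach from 0 m to 0.5 m
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.2, 120)
x_meas = min_jerk(t, 0.0, 0.5, 1.2) + rng.normal(0.0, 0.003, t.size)

# Fit the model parameters to the data and inspect the residual error
p_opt, _ = curve_fit(min_jerk, t, x_meas, p0=[0.0, 0.4, 1.0])
x_fit = min_jerk(t, *p_opt)
print("fitted (x0, xf, T):", np.round(p_opt, 3))
print("RMS fit error (m):", np.round(np.sqrt(np.mean((x_meas - x_fit) ** 2)), 4))
```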
Pub Date: 2021-07-19 | DOI: 10.1109/HUMANOIDS47582.2021.9555788
F. Krebs, Andre Meixner, Isabel Patzer, T. Asfour
Learning models of bimanual manipulation tasks from human demonstration requires capturing human body and hand motions, as well as the objects involved in the demonstration, to provide all the information needed for learning manipulation task models on the symbolic and subsymbolic levels. We provide a new multi-modal dataset of bimanual manipulation actions consisting of accurate human whole-body motion data, the full configuration of both hands, and the 6D poses and trajectories of all objects involved in the task. The data is collected using five different sensor systems: a motion capture system, two data gloves, three RGB-D cameras, a head-mounted egocentric camera, and three inertial measurement units (IMUs). The dataset includes 12 actions of bimanual daily household activities performed by two healthy subjects, with a large number of intra-action variations and three repetitions of each action variation, resulting in 588 recorded demonstrations. A total of 21 household items are used to perform the various actions. In addition to the data collection, we developed tools and methods for the standardized representation and organization of multi-modal sensor data in large-scale human motion databases. We extended our Master Motor Map (MMM) framework to allow the mapping of collected demonstrations to a reference model of the human body as well as the segmentation and annotation of recorded manipulation tasks. The dataset includes raw sensor data, normalized data in the MMM format, and annotations, and is made publicly available in the KIT Whole-Body Human Motion Database.
Title: The KIT Bimanual Manipulation Dataset. Published in: 2020 IEEE-RAS 20th International Conference on Humanoid Robots (Humanoids).
Pub Date: 2021-07-19 | DOI: 10.1109/HUMANOIDS47582.2021.9555806
Lukas Rustler, Bohumila Potočná, Michal Polic, K. Štěpánová, M. Hoffmann
Robots largely lacked the sense of touch for decades. As artificial sensitive skins covering large areas of robot bodies start to appear, the positions of the sensors on the robot body must be known for the skin to be useful to the machine. In this work, a Nao humanoid robot was retrofitted with pressure-sensitive skin on the head, torso, and arms. We experimentally compare the accuracy and effort associated with the following skin spatial calibration approaches and their combinations: (i) combining CAD models and the skin layout in 2D, (ii) 3D reconstruction from images, and (iii) using robot kinematics to calibrate the skin by self-contact. To acquire the 3D positions of taxels on individual skin parts, methods (i) and (ii) were similarly laborious, but 3D reconstruction was more accurate. To align these 3D point clouds with the robot kinematics, two variants of self-contact were employed: skin-on-skin and the use of a custom end effector (finger). In combination with the 3D reconstruction data, mean calibration errors below the radius of the individual sensors were achieved (2 mm). A significant perturbation of more than 100 torso taxel positions could be corrected using self-contact calibration, reaching a mean error of approximately 3 mm. This work is not a proof of concept but a deployment of the approaches at scale: the outcome is an actual spatial calibration of all 970 taxels on the robot body. As the different calibration approaches are evaluated both in isolation and in different combinations, this work provides a guideline applicable to the spatial calibration of different sensor arrays.
Title: Spatial calibration of whole-body artificial skin on a humanoid robot: comparing self-contact, 3D reconstruction, and CAD-based calibration. Published in: 2020 IEEE-RAS 20th International Conference on Humanoid Robots (Humanoids).
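A common way to align a reconstructed taxel point cloud with points expressed in the robot's kinematic frame (e.g. taxels localized by self-contact) is a rigid Kabsch/SVD fit over corresponding pairs. The sketch below shows this standard step on synthetic data; it is not the paper's calibration pipeline, which compares several approaches and their combinations.

```python
# Minimal sketch (not the paper's pipeline): rigidly align a reconstructed
# taxel point cloud to points expressed in the robot's kinematic frame using
# the Kabsch/SVD method on corresponding pairs. The data is synthetic.
import numpy as np

def kabsch(P, Q):
    """Return R, t minimizing ||R @ P_i + t - Q_i|| over corresponding rows."""
    Pc, Qc = P - P.mean(axis=0), Q - Q.mean(axis=0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = Q.mean(axis=0) - R @ P.mean(axis=0)
    return R, t

# Synthetic example: taxel positions from 3D reconstruction (P) and the same
# taxels located via self-contact in the kinematic frame (Q)
rng = np.random.default_rng(0)
P = rng.uniform(-0.05, 0.05, size=(30, 3))
angle = 0.4
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
Q = (R_true @ P.T).T + np.array([0.1, -0.02, 0.3]) + rng.normal(0, 1e-3, P.shape)

R, t = kabsch(P, Q)
residual = np.linalg.norm((R @ P.T).T + t - Q, axis=1)
print("mean alignment error (m):", residual.mean())
```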