Autonomous Bimanual Functional Regrasping of Novel Object Class Instances
Pub Date: 2019-10-01 | DOI: 10.1109/Humanoids43949.2019.9035030
D. Pavlichenko, Diego Rodriguez, Christian Lenz, Max Schwarz, Sven Behnke
In human-made environments, robots need to be able to fully operate the objects in their surroundings, i.e., objects are required to be functionally grasped rather than only picked. This imposes very strict constraints on the object pose for a direct functional grasp to be feasible. Inspired by the anthropomorphic nature of humanoid robots, we propose an approach that first grasps an object with one hand, obtaining full control over its pose, and subsequently performs the functional grasp with the second hand. Thus, we develop a fully autonomous pipeline for dual-arm functional regrasping of novel familiar objects, i.e., objects never seen before that belong to a known object category, e.g., spray bottles. This process involves semantic segmentation, object pose estimation, non-rigid mesh registration, grasp sampling, handover pose generation and in-hand pose refinement. The latter is used to compensate for the unpredictable object movement during the first grasp. The approach is applied to a human-like upper body. To the best of the authors' knowledge, this is the first system that exhibits autonomous bimanual functional regrasping capabilities. We demonstrate that our system yields reliable success rates and can be applied online to real-world tasks using only one off-the-shelf RGB-D sensor.
{"title":"Autonomous Bimanual Functional Regrasping of Novel Object Class Instances","authors":"D. Pavlichenko, Diego Rodriguez, Christian Lenz, Max Schwarz, Sven Behnke","doi":"10.1109/Humanoids43949.2019.9035030","DOIUrl":"https://doi.org/10.1109/Humanoids43949.2019.9035030","url":null,"abstract":"In human-made scenarios, robots need to be able to fully operate objects in their surroundings, i.e., objects are required to be functionally grasped rather than only picked. This imposes very strict constraints on the object pose such that a direct grasp can be performed. Inspired by the anthropomorphic nature of humanoid robots, we propose an approach that first grasps an object with one hand, obtaining full control over its pose, and performs the functional grasp with the second hand subsequently. Thus, we develop a fully autonomous pipeline for dual-arm functional regrasping of novel familiar objects, i.e., objects never seen before that belong to a known object category, e.g., spray bottles. This process involves semantic segmentation, object pose estimation, non-rigid mesh registration, grasp sampling, handover pose generation and in-hand pose refinement. The latter is used to compensate for the unpredictable object movement during the first grasp. The approach is applied to a human-like upper body. To the best knowledge of the authors, this is the first system that exhibits autonomous bimanual functional regrasping capabilities. We demonstrate that our system yields reliable success rates and can be applied on-line to real-world tasks using only one off-the-shelf RGB-D sensor.","PeriodicalId":404758,"journal":{"name":"2019 IEEE-RAS 19th International Conference on Humanoid Robots (Humanoids)","volume":"328 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116245265","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Autonomous Learning of Assembly Tasks from the Corresponding Disassembly Tasks
Pub Date: 2019-10-01 | DOI: 10.1109/Humanoids43949.2019.9035052
Mihael Simonič, L. Žlajpah, A. Ude, B. Nemec
An assembly task is in many cases just the reverse execution of the corresponding disassembly task. During assembly, the object being assembled passes consecutively from state to state until completion, and the set of possible movements becomes more and more constrained. Based on the observation that autonomous learning of physically constrained tasks can be advantageous, we exploit the information obtained while learning the disassembly task when performing the assembly. For autonomous learning of a disassembly policy, we propose to use hierarchical reinforcement learning, where learning is decomposed into high-level decision-making and an underlying lower-level intelligent compliant controller, which exploits the natural motion in a constrained environment. During the reverse execution of the disassembly policy, the motion is further optimized by means of an iterative learning controller. The proposed approach was verified on two challenging tasks: a maze learning problem and autonomous learning of inserting a car bulb into the casing.
{"title":"Autonomous Learning of Assembly Tasks from the Corresponding Disassembly Tasks","authors":"Mihael Simonič, L. Žlajpah, A. Ude, B. Nemec","doi":"10.1109/Humanoids43949.2019.9035052","DOIUrl":"https://doi.org/10.1109/Humanoids43949.2019.9035052","url":null,"abstract":"An assembly task is in many cases just a reverse execution of the corresponding disassembly task. During the assembly, the object being assembled passes consecutively from state to state until completed, and the set of possible movements becomes more and more constrained. Based on the observation that autonomous learning of physically constrained tasks can be advantageous, we use information obtained during learning of disassembly in assembly. For autonomous learning of a disassembly policy we propose to use hierarchical reinforcement learning, where learning is decomposed into a high-level decision-making and underlying lower-level intelligent compliant controller, which exploits the natural motion in a constrained environment. During the reverse execution of disassembly policy, the motion is further optimized by means of an iterative learning controller. The proposed approach was verified on two challenging tasks - a maze learning problem and autonomous learning of inserting a car bulb into the casing.","PeriodicalId":404758,"journal":{"name":"2019 IEEE-RAS 19th International Conference on Humanoid Robots (Humanoids)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132600029","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dual-Arm In-Hand Manipulation Using Visual Feedback
Pub Date: 2019-10-01 | DOI: 10.1109/Humanoids43949.2019.9035058
S. Cruciani, Kaiyu Hang, Christian Smith, D. Kragic
In this work, we address the problem of executing in-hand manipulation based on visual input. Given an initial grasp, the robot has to change its grasp configuration without releasing the object. We propose a method for in-hand manipulation planning and execution with a dual-arm robot, based on information about the object's shape. From the available information on the object, which can be a complete point cloud or only partial data, our method plans a sequence of rotations and translations to reconfigure the object's pose. This sequence is executed using non-prehensile pushes defined as relative motions between the two robot arms.
{"title":"Dual-Arm In-Hand Manipulation Using Visual Feedback","authors":"S. Cruciani, Kaiyu Hang, Christian Smith, D. Kragic","doi":"10.1109/Humanoids43949.2019.9035058","DOIUrl":"https://doi.org/10.1109/Humanoids43949.2019.9035058","url":null,"abstract":"In this work, we address the problem of executing in-hand manipulation based on visual input. Given an initial grasp, the robot has to change its grasp configuration without releasing the object. We propose a method for in-hand manipulation planning and execution based on information on the object's shape using a dual-arm robot. From the available information on the object, which can be a complete point cloud but also partial data, our method plans a sequence of rotations and translations to reconfigure the object's pose. This sequence is executed using non-prehensile pushes defined as relative motions between the two robot arms.","PeriodicalId":404758,"journal":{"name":"2019 IEEE-RAS 19th International Conference on Humanoid Robots (Humanoids)","volume":"136 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132151535","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Terrain Segmentation and Roughness Estimation using RGB Data: Path Planning Application on the CENTAURO Robot
Pub Date: 2019-10-01 | DOI: 10.1109/Humanoids43949.2019.9035009
Vivekanandan Suryamurthy, V. S. Raghavan, Arturo Laurenzi, N. Tsagarakis, D. Kanoulas
Robots operating in real-world environments require a high-level perceptual understanding of the chief physical properties of the terrain they are traversing. In unknown environments, roughness is one such important terrain property that can play a key role in devising robot control and planning strategies. In this paper, we present a fast method for predicting pixel-wise terrain labels (stone, sand, road/sidewalk, wood, grass, metal) and estimating roughness using a single RGB-based deep neural network. Real-world RGB images are used to experimentally validate the presented approach. Furthermore, we demonstrate an application of the proposed method on the centaur-like wheeled-legged robot CENTAURO by integrating it with a navigation planner that is capable of reconfiguring the leg joints to modify the robot footprint polygon for stability purposes or for safe traversal among obstacles.
{"title":"Terrain Segmentation and Roughness Estimation using RGB Data: Path Planning Application on the CENTAURO Robot","authors":"Vivekanandan Suryamurthy, V. S. Raghavan, Arturo Laurenzi, N. Tsagarakis, D. Kanoulas","doi":"10.1109/Humanoids43949.2019.9035009","DOIUrl":"https://doi.org/10.1109/Humanoids43949.2019.9035009","url":null,"abstract":"Robots operating in real world environments require a high-level perceptual understanding of the chief physical properties of the terrain they are traversing. In unknown environments, roughness is one such important terrain property that could play a key role in devising robot control/planning strategies. In this paper, we present a fast method for predicting pixel-wise labels of terrain (stone, sand, road/sidewalk, wood, grass, metal) and roughness estimation, using a single RGB-based deep neural network. Real world RGB images are used to experimentally validate the presented approach. Furthermore, we demonstrate an application of our proposed method on the centaur-like wheeled-legged robot CENTAURO, by integrating it with a navigation planner that is capable of re-configuring the leg joints to modify the robot footprint polygon for stability purposes or for safe traversal among obstacles.","PeriodicalId":404758,"journal":{"name":"2019 IEEE-RAS 19th International Conference on Humanoid Robots (Humanoids)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133422600","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sensor-based Whole-Body Planning/Replanning for Humanoid Robots
Pub Date: 2019-10-01 | DOI: 10.1109/Humanoids43949.2019.9035017
P. Ferrari, Marco Cognetti, G. Oriolo
We propose a sensor-based motion planning/replanning method for a humanoid that must execute a task implicitly requiring locomotion. It is assumed that the environment is unknown and the robot is equipped with a depth sensor. The proposed approach hinges upon three modules that run concurrently: mapping, planning and execution. The mapping module is in charge of incrementally building a 3D environment map during the robot motion, based on the information provided by the depth sensor. The planning module computes future motions of the humanoid, taking into account the geometry of both the environment and the robot. To this end, it uses a two-stage local motion planner consisting of a randomized CoM movement-primitives-based algorithm that allows on-line replanning. Previously planned motions are performed by the execution module. The proposed approach is validated through simulations in V-REP with the humanoid robot NAO.
{"title":"Sensor-based Whole-Body Planning/Replanning for Humanoid Robots","authors":"P. Ferrari, Marco Cognetti, G. Oriolo","doi":"10.1109/Humanoids43949.2019.9035017","DOIUrl":"https://doi.org/10.1109/Humanoids43949.2019.9035017","url":null,"abstract":"We propose a sensor-based motion plan-ning/replanning method for a humanoid that must execute a task implicitly requiring locomotion. It is assumed that the environment is unknown and the robot is equipped with a depth sensor. The proposed approach hinges upon three modules that run concurrently: mapping, planning and execution. The mapping module is in charge of incrementally building a 3D environment map during the robot motion, based on the information provided by the depth sensor. The planning module computes future motions of the humanoid, taking into account the geometry of both the environment and the robot. To this end, it uses a 2-stages local motion planner consisting in a randomized CoM movement primitives-based algorithm that allows on-line replanning. Previously planned motions are performed through the execution module. The proposed approach is validated through simulations in V-REP for the humanoid robot NAO.","PeriodicalId":404758,"journal":{"name":"2019 IEEE-RAS 19th International Conference on Humanoid Robots (Humanoids)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132593912","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Active Vision for Extraction of Physically Plausible Support Relations
Pub Date: 2019-10-01 | DOI: 10.1109/Humanoids43949.2019.9035018
Markus Grotz, D. Sippel, T. Asfour
Robots manipulating objects in cluttered scenes require a semantic scene understanding, which describes objects and their relations. Knowledge about physically plausible support relations among objects in such scenes is key for action execution. Due to occlusions, however, support relations often cannot be reliably inferred from a single view only. In this work, we present an active vision system that mitigates occlusion, and explores the scene for object support relations. We extend our previous work in which physically plausible support relations are extracted based on geometric primitives. The active vision system generates view candidates based on existing support relations among the objects, and selects the next best view. We evaluate our approach in simulation, as well as on the humanoid robot ARMAR-6, and show that the active vision system improves the semantic scene model by extracting physically plausible support relations from multiple views.
{"title":"Active Vision for Extraction of Physically Plausible Support Relations","authors":"Markus Grotz, D. Sippel, T. Asfour","doi":"10.1109/Humanoids43949.2019.9035018","DOIUrl":"https://doi.org/10.1109/Humanoids43949.2019.9035018","url":null,"abstract":"Robots manipulating objects in cluttered scenes require a semantic scene understanding, which describes objects and their relations. Knowledge about physically plausible support relations among objects in such scenes is key for action execution. Due to occlusions, however, support relations often cannot be reliably inferred from a single view only. In this work, we present an active vision system that mitigates occlusion, and explores the scene for object support relations. We extend our previous work in which physically plausible support relations are extracted based on geometric primitives. The active vision system generates view candidates based on existing support relations among the objects, and selects the next best view. We evaluate our approach in simulation, as well as on the humanoid robot ARMAR-6, and show that the active vision system improves the semantic scene model by extracting physically plausible support relations from multiple views.","PeriodicalId":404758,"journal":{"name":"2019 IEEE-RAS 19th International Conference on Humanoid Robots (Humanoids)","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133711209","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Development of a Humanoid Dual Arm System for a Single Spherical Wheeled Balancing Mobile Robot
Pub Date: 2019-10-01 | DOI: 10.1109/Humanoids43949.2019.9034999
Roberto Shut, R. Hollis
This paper presents a new 14-DoF dual-arm manipulation system for the CMU ballbot. The result is a new type of robot that combines smooth omnidirectional motion with the capability to interact with objects and the environment through manipulation. The system includes a pair of 7-DoF arms. Each arm weighs 12.9 kg, has a reach of 0.815 m, and a maximum payload of 10 kg at full extension. The ballbot's arms have a larger payload-to-weight ratio than commercial cobot arms with similar or greater payload. Design features include highly integrated sensor-actuator-control units in each joint, a lightweight exoskeleton structure, and anthropomorphic kinematics. The integration of the arms with the CMU ballbot is demonstrated through heavy payload carrying and balancing experiments.
{"title":"Development of a Humanoid Dual Arm System for a Single Spherical Wheeled Balancing Mobile Robot","authors":"Roberto Shut, R. Hollis","doi":"10.1109/Humanoids43949.2019.9034999","DOIUrl":"https://doi.org/10.1109/Humanoids43949.2019.9034999","url":null,"abstract":"This paper presents a new 14-DoF dual manipulation system for the CMU ballbot. The result is a new type of robot that combines smooth omnidirectional motion with the capability to interact with objects and the environment through manipulation. The system includes a pair of 7-DoF arms. Each arm weighs 12.9 kg, with a reach of 0.815 m, and a maximum payload of 10 kg at full extension. The ballbot's arms have a larger payload-to-weight ratio than commercial cobot arms with similar or greater payload. Design features include highly integrated sensor-actuator-control units in each joint, lightweight exoskeleton structure, and anthropomorphic kinematics. The integration of the arms with the CMU ballbot is demonstrated through heavy payload carrying and balancing experiments.","PeriodicalId":404758,"journal":{"name":"2019 IEEE-RAS 19th International Conference on Humanoid Robots (Humanoids)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127739513","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Placing Objects with prior In-Hand Manipulation using Dexterous Manipulation Graphs
Pub Date: 2019-10-01 | DOI: 10.1109/Humanoids43949.2019.9035033
Joshua A. Haustein, S. Cruciani, Rizwan Asif, Kaiyu Hang, D. Kragic
We address the problem of planning the placement of a grasped object with a robot manipulator. More specifically, the robot is tasked to place the grasped object such that a placement preference function is maximized. For this, we present an approach that uses in-hand manipulation to adjust the robot's initial grasp and thereby extend the set of reachable placements. Given an initial grasp, the algorithm computes a set of grasps that can be reached by pushing and rotating the object in-hand. With this set of reachable grasps, it then searches for a stable placement that maximizes the preference function. If successful, it returns a sequence of in-hand pushes that adjusts the initial grasp to a more advantageous one, together with a transport motion that carries the object to the placement. We evaluate the algorithm's performance on various placing scenarios and observe its effectiveness also in challenging scenes containing many obstacles. Our experiments demonstrate that re-grasping with in-hand manipulation increases the quality of placements the robot can reach. In particular, it enables the algorithm to find solutions in situations where safe placing with the initial grasp would not be possible.
{"title":"Placing Objects with prior In-Hand Manipulation using Dexterous Manipulation Graphs","authors":"Joshua A. Haustein, S. Cruciani, Rizwan Asif, Kaiyu Hang, D. Kragic","doi":"10.1109/Humanoids43949.2019.9035033","DOIUrl":"https://doi.org/10.1109/Humanoids43949.2019.9035033","url":null,"abstract":"We address the problem of planning the placement of a grasped object with a robot manipulator. More specifically, the robot is tasked to place the grasped object such that a placement preference function is maximized. For this, we present an approach that uses in-hand manipulation to adjust the robot's initial grasp to extend the set of reachable placements. Given an initial grasp, the algorithm computes a set of grasps that can be reached by pushing and rotating the object in-hand. With this set of reachable grasps, it then searches for a stable placement that maximizes the preference function. If successful it returns a sequence of in-hand pushes to adjust the initial grasp to a more advantageous grasp together with a transport motion that carries the object to the placement. We evaluate our algorithm's performance on various placing scenarios, and observe its effectiveness also in challenging scenes containing many obstacles. Our experiments demonstrate that re-grasping with in-hand manipulation increases the quality of placements the robot can reach. In particular, it enables the algorithm to find solutions in situations where safe placing with the initial grasp wouldn't be possible.","PeriodicalId":404758,"journal":{"name":"2019 IEEE-RAS 19th International Conference on Humanoid Robots (Humanoids)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129422746","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Integration of Dual-Arm Manipulation in a Passivity Based Whole-Body Controller for Torque-Controlled Humanoid Robots
Pub Date: 2019-10-01 | DOI: 10.1109/Humanoids43949.2019.9035010
J. Garcia-Haro, Bernd Henze, George Mesesan, Santiago Martínez de la Casa Díaz, C. Ott
This work presents an extension of balance control for torque-controlled humanoid robots. Within a non-strict task hierarchy, the controller allows the robot to use the feet end-effectors to balance, while the remaining hand end-effectors can be used to perform dual-arm manipulation. The controller generates a passive and compliant behaviour to regulate the location of the centre of mass (CoM), the orientation of the hip and the poses of the end-effectors assigned to the interaction task (in this case bi-manipulation). An appropriate wrench (force and torque) is then applied to each of the end-effectors employed for the task. The essential feature of the new controller is that the desired wrench at the CoM is computed as the sum of the balancing and bi-manipulation wrenches. The bi-manipulation wrenches are obtained from a new dynamic model that allows the robot to compliantly approach, grasp and manipulate large objects. The feedback controller is retained, but it is combined with a bi-manipulation-oriented feedforward control to improve performance in object trajectory tracking. The controller is tested in different experiments with the robot TORO.
{"title":"Integration of Dual-Arm Manipulation in a Passivity Based Whole-Body Controller for Torque-Controlled Humanoid Robots","authors":"J. Garcia-Haro, Bernd Henze, George Mesesan, Santiago Martínez de la Casa Díaz, C. Ott","doi":"10.1109/Humanoids43949.2019.9035010","DOIUrl":"https://doi.org/10.1109/Humanoids43949.2019.9035010","url":null,"abstract":"This work presents an extension of balance control for torque-controlled humanoid robots. Within a non-strict task hierarchy, the controller allows the robot to use the feet end-effectors to balance, while the remaining hand end-effectors can be used to perform Dual-Arm manipulation. The controller generates a passive and compliance behaviour to regulate the location of the centre of mass (CoM), the orientation of the hip and the poses of each end-effector assigned to the task of interaction (in this case bi-manipulation). Then, an appropriate wrench (force and torque) is applied to each of the end-effectors employed for the task to achieve this purpose. Now, in this new controller, the essential requirement focuses on the fact that the desired wrench in the CoM is computed through the sum of the balancing and bi-manipulation wrenches. The bi-manipulation wrenches are obtained through a new dynamic model that allows executing tasks of approaching the grip and manipulation of large objects compliantly. On the other hand, the feedback controller has been maintained but in combination with a bi-manipulation-oriented feedforward control to improve the performance in the object trajectory tracking. This controller is tested in different experiments with the robot TORO.","PeriodicalId":404758,"journal":{"name":"2019 IEEE-RAS 19th International Conference on Humanoid Robots (Humanoids)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126702493","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Motion Retargeting and Control for Teleoperated Physical Human-Robot Interaction
Pub Date: 2019-10-01 | DOI: 10.1109/Humanoids43949.2019.9035060
Akshit Kaplish, K. Yamane
In this paper, we present motion retargeting and control algorithms for teleoperated physical human-robot interaction (pHRI). We employ unilateral teleoperation in which a sensor-equipped operator interacts with a static object such as a mannequin to provide the motion and force references. The controller takes the references as well as current robot states and contact forces as input, and outputs the joint torques to track the operator's contact forces while preserving the expression and style of the motion. We develop a hierarchical optimization scheme combined with a motion retargeting algorithm that resolves the discrepancy between the contact states of the operator and robot due to different kinematic parameters and body shapes. We demonstrate the controller performance on a dual-arm robot with soft skin and contact force sensors using pre-recorded human demonstrations of hugging.
{"title":"Motion Retargeting and Control for Teleoperated Physical Human-Robot Interaction","authors":"Akshit Kaplish, K. Yamane","doi":"10.1109/Humanoids43949.2019.9035060","DOIUrl":"https://doi.org/10.1109/Humanoids43949.2019.9035060","url":null,"abstract":"In this paper, we present motion retargeting and control algorithms for teleoperated physical human-robot interaction (pHRI). We employ unilateral teleoperation in which a sensor-equipped operator interacts with a static object such as a mannequin to provide the motion and force references. The controller takes the references as well as current robot states and contact forces as input, and outputs the joint torques to track the operator's contact forces while preserving the expression and style of the motion. We develop a hierarchical optimization scheme combined with a motion retargeting algorithm that resolves the discrepancy between the contact states of the operator and robot due to different kinematic parameters and body shapes. We demonstrate the controller performance on a dual-arm robot with soft skin and contact force sensors using pre-recorded human demonstrations of hugging.","PeriodicalId":404758,"journal":{"name":"2019 IEEE-RAS 19th International Conference on Humanoid Robots (Humanoids)","volume":"75 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114725886","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}