Organizing committee
K. Kawamura, S. G. Lee, S. Sugano, A. U.S., K. Yokoi
Pub Date: 2021-10-25 | DOI: 10.1109/ichr.2010.5686278
{"title":"Organizing committe","authors":"K. Kawamura, S. G. Lee, S. Sugano, A. U.S., K. Yokoi","doi":"10.1109/ichr.2010.5686278","DOIUrl":"https://doi.org/10.1109/ichr.2010.5686278","url":null,"abstract":"","PeriodicalId":243505,"journal":{"name":"2010 10th IEEE-RAS International Conference on Humanoid Robots","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124926334","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Manipulation of documented objects by a walking humanoid robot
S. Dalibard, A. Nakhaei, F. Lamiraux, J. Laumond
Pub Date: 2010-12-06 | DOI: 10.1109/ICHR.2010.5686827
This paper deals with manipulation-task planning for a humanoid robot while stepping. It introduces the concept of “documented” objects, i.e., objects that provide information on how to manipulate them. The planning phase is decoupled into two parts. First, a randomized motion planner uses the documentation of the object to quickly plan a collision-free motion for a simplified model of the robot manipulating the object. Then, an inverse kinematics solver animates the whole set of the robot's degrees of freedom by converting the simplified path into time-parametrized tasks. Several examples demonstrate the generality of the method.
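As a rough illustration of this two-stage decoupling (not the authors' implementation), the Python sketch below plans with a bare-bones RRT over a stand-in "simplified model" and then time-parametrizes the result for a whole-body IK solver to track. The 2-D configuration space, unit sampling bounds, and the `collision_free` test are all assumptions invented for the example.

```python
import random

def plan_simplified(start, goal, collision_free, iters=5000, step=0.05, goal_bias=0.1):
    """Stage 1: a bare-bones RRT over a simplified robot-plus-object model.
    `collision_free(q)` is the caller's collision test for the simplified model;
    configurations are plain tuples, a stand-in for the real C-space."""
    start, goal = tuple(start), tuple(goal)
    parent = {start: None}
    for _ in range(iters):
        target = goal if random.random() < goal_bias else \
            tuple(random.uniform(-1.0, 1.0) for _ in start)   # assumed unit bounds
        near = min(parent, key=lambda q: sum((a - b) ** 2 for a, b in zip(q, target)))
        d = [t - n for t, n in zip(target, near)]
        norm = max(sum(c * c for c in d) ** 0.5, 1e-9)
        new = tuple(n + step * c / norm for n, c in zip(near, d))
        if collision_free(new):
            parent[new] = near
            if sum((a - b) ** 2 for a, b in zip(new, goal)) ** 0.5 <= step:
                path, q = [goal], new
                while q is not None:
                    path.append(q)
                    q = parent[q]
                return path[::-1]
    return None

def time_parametrize(path, duration):
    """Stage 2 (interface only): turn the simplified path into (time, waypoint)
    tasks that a whole-body inverse-kinematics solver would then track."""
    dt = duration / max(len(path) - 1, 1)
    return [(i * dt, q) for i, q in enumerate(path)]

random.seed(0)
# Toy obstacle: a wall near x = 0.3 blocking |y| <= 0.2.
free = lambda q: abs(q[0] - 0.3) > 0.05 or abs(q[1]) > 0.2
path = plan_simplified((0.0, 0.0), (0.6, 0.4), free)
print(time_parametrize(path, 5.0)[:3] if path else "no path found")
```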
{"title":"Manipulation of documented objects by a walking humanoid robot","authors":"S. Dalibard, A. Nakhaei, F. Lamiraux, J. Laumond","doi":"10.1109/ICHR.2010.5686827","DOIUrl":"https://doi.org/10.1109/ICHR.2010.5686827","url":null,"abstract":"This paper deals with manipulation task planning for a humanoid robot while stepping. It introduces the concept of “documented” objects, i.e. objects that provide information on how to manipulate them. The planning phase is decoupled into two parts. First a random motion planner uses the documentation of the object to quickly plan a collision free motion for a simplified model of the robot manipulating the object. Then an inverse kinematics solver animates the whole set of the robot's degrees of freedom by converting the simplified path into time parametrized tasks. Several examples show the generalization of the method.","PeriodicalId":243505,"journal":{"name":"2010 10th IEEE-RAS International Conference on Humanoid Robots","volume":"58 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122556384","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Learning table tennis with a Mixture of Motor Primitives
Katharina Muelling, J. Kober, Jan Peters
Pub Date: 2010-12-01 | DOI: 10.1109/ICHR.2010.5686298
Table tennis is a sufficiently complex motor task for studying complete skill-learning systems. It consists of several elementary motions and requires fast movements, accurate control, and online adaptation. To represent the elementary movements needed for robot table tennis, we rely on dynamical systems motor primitives (DMPs). While such DMPs have been used successfully to learn a variety of simple motor tasks, they only represent single elementary actions. In order to select and generalize among different striking movements, we present a new approach, called Mixture of Motor Primitives, that uses a gating network to activate appropriate motor primitives. The resulting policy both selects among the appropriate motor primitives and generalizes between them. In order to obtain a fully learned robot table tennis setup, we also address the problem of predicting the necessary context information, i.e., the hitting point in time and space where we want to hit the ball. We show that the resulting setup was capable of playing rudimentary table tennis using an anthropomorphic robot arm.
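The gating idea can be sketched compactly. The toy below is an assumption-laden stand-in, not the paper's learned gating network: it weights two hand-coded "striking" primitives by how close the current context (e.g., a predicted hitting point) lies to the context each primitive was trained in, then blends their outputs.

```python
import numpy as np

class MixtureOfMotorPrimitives:
    """Minimal sketch of a gated mixture: each primitive votes on a command,
    and a fixed Gaussian kernel over contexts plays the role of the gating
    network (the paper learns this gating)."""

    def __init__(self, primitives, contexts, bandwidth=0.2):
        self.primitives = primitives              # callables: context -> command
        self.contexts = np.asarray(contexts)      # training context per primitive
        self.bandwidth = bandwidth

    def weights(self, context):
        # Primitives trained on contexts similar to the query get high weight.
        d2 = np.sum((self.contexts - context) ** 2, axis=1)
        w = np.exp(-d2 / (2 * self.bandwidth ** 2))
        return w / w.sum()

    def __call__(self, context):
        w = self.weights(context)
        return w @ np.stack([p(context) for p in self.primitives])

# Two toy "striking" primitives, trained at different hitting points.
forehand = lambda c: np.array([1.0, 0.0])
backhand = lambda c: np.array([0.0, 1.0])
momp = MixtureOfMotorPrimitives([forehand, backhand],
                                contexts=[[0.3, 0.0], [-0.3, 0.0]])
print(momp(np.array([0.25, 0.0])))  # dominated by the forehand primitive
```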
{"title":"Learning table tennis with a Mixture of Motor Primitives","authors":"Katharina Muelling, J. Kober, Jan Peters","doi":"10.1109/ICHR.2010.5686298","DOIUrl":"https://doi.org/10.1109/ICHR.2010.5686298","url":null,"abstract":"Table tennis is a sufficiently complex motor task for studying complete skill learning systems. It consists of several elementary motions and requires fast movements, accurate control, and online adaptation. To represent the elementary movements needed for robot table tennis, we rely on dynamic systems motor primitives (DMP). While such DMPs have been successfully used for learning a variety of simple motor tasks, they only represent single elementary actions. In order to select and generalize among different striking movements, we present a new approach, called Mixture of Motor Primitives that uses a gating network to activate appropriate motor primitives. The resulting policy enables us to select among the appropriate motor primitives as well as to generalize between them. In order to obtain a fully learned robot table tennis setup, we also address the problem of predicting the necessary context information, i.e., the hitting point in time and space where we want to hit the ball. We show that the resulting setup was capable of playing rudimentary table tennis using an anthropomorphic robot arm.","PeriodicalId":243505,"journal":{"name":"2010 10th IEEE-RAS International Conference on Humanoid Robots","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117027931","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Evaluation of feature representation and machine learning methods in grasp stability learning
J. Laaksonen, V. Kyrki, D. Kragic
Pub Date: 2010-12-01 | DOI: 10.1109/ICHR.2010.5686310
This paper addresses the problem of sensor-based grasping under uncertainty, specifically, the on-line estimation of grasp stability. We show that machine learning approaches can to some extent detect grasp stability from haptic pressure and finger joint information. Using data from both simulations and two real robotic hands, the paper compares different feature representations and machine learning methods to evaluate their performance in determining the grasp stability. A boosting classifier was found to perform the best of the methods tested.
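For concreteness, a minimal sketch of this kind of evaluation is shown below, using scikit-learn's AdaBoost as the boosting classifier. The feature layout (tactile pressures plus finger joint angles) and the synthetic labels are invented for the example; the paper's data comes from simulation and two real robot hands.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Hypothetical feature layout: 12 tactile pressures + 12 joint angles per grasp.
X = rng.normal(size=(500, 24))
# Synthetic stable/unstable labels loosely tied to the pressure features.
y = (X[:, :12].mean(axis=1) + 0.1 * rng.normal(size=500) > 0).astype(int)

clf = AdaBoostClassifier(n_estimators=100)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```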
{"title":"Evaluation of feature representation and machine learning methods in grasp stability learning","authors":"J. Laaksonen, V. Kyrki, D. Kragic","doi":"10.1109/ICHR.2010.5686310","DOIUrl":"https://doi.org/10.1109/ICHR.2010.5686310","url":null,"abstract":"This paper addresses the problem of sensor-based grasping under uncertainty, specifically, the on-line estimation of grasp stability. We show that machine learning approaches can to some extent detect grasp stability from haptic pressure and finger joint information. Using data from both simulations and two real robotic hands, the paper compares different feature representations and machine learning methods to evaluate their performance in determining the grasp stability. A boosting classifier was found to perform the best of the methods tested.","PeriodicalId":243505,"journal":{"name":"2010 10th IEEE-RAS International Conference on Humanoid Robots","volume":"66 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122717206","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Development of a mini-humanoid pianist
Alyssa M. Batula, Youngmoo E. Kim
Pub Date: 2010-12-01 | DOI: 10.1109/ICHR.2010.5686330
While there have been several robots designed to play musical instruments, most are specialized (non-humanoid) designs that do not attempt to model a human performer's specific gestures and movements. Musical instruments are, of course, designed to be played by humans, and the process of developing humanoid performers offers the potential of revealing new insights into the control of instruments in a musically expressive manner. This paper describes our system that enables a small humanoid to play a music keyboard and also respond to acoustic input. The system uses a combination of pre-programmed positions, audio feedback, and self-calibration methods in order to perform simple two-finger piano pieces. As with human performers, control adjustments can be made in real-time when musical errors are detected during the course of a performance. This system is intended as a proof-of-concept and prototyping platform for humanoid music performance methods, which are ultimately destined for the Hubo adult-sized humanoid. The current system allows us to develop algorithms in a low-risk environment before moving to the much more capable, but costly, Hubo platform. In developing and analyzing humanoid performance gestures, this effort represents a step towards a better understanding of human musicianship, musical expression, and creativity, which may ultimately facilitate improved human-robot interaction.
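One ingredient described above, audio feedback for detecting musical errors, can be sketched with a crude FFT-based pitch check. This is an illustrative assumption, not the system's actual listening pipeline.

```python
import numpy as np

def detect_pitch(signal, sample_rate):
    """Estimate the dominant pitch from the FFT peak (a crude detector)."""
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), 1.0 / sample_rate)
    return freqs[np.argmax(spectrum)]

def note_is_correct(signal, expected_hz, sample_rate=44100, tol_hz=10.0):
    """Audio-feedback step: flag a musical error when the detected pitch
    deviates from the expected key's frequency; a controller could then
    re-calibrate its finger position in real time."""
    return abs(detect_pitch(signal, sample_rate) - expected_hz) < tol_hz

# Synthetic test: a 440 Hz tone should register as a correctly played A4.
t = np.arange(0, 0.1, 1 / 44100)
assert note_is_correct(np.sin(2 * np.pi * 440 * t), expected_hz=440.0)
```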
{"title":"Development of a mini-humanoid pianist","authors":"Alyssa M. Batula, Youngmoo E. Kim","doi":"10.1109/ICHR.2010.5686330","DOIUrl":"https://doi.org/10.1109/ICHR.2010.5686330","url":null,"abstract":"While there have been several robots designed to play musical instruments, most are specialized (non-humanoid) designs that do not attempt to model a human performer's specific gestures and movements. Musical instruments are, of course, designed to be played by humans, and the process of developing humanoid performers offers the potential of revealing new insights into the control of instruments in a musically expressive manner. This paper describes our system that enables a small humanoid to play a music keyboard and also respond to acoustic input. The system uses a combination of pre-programmed positions, audio feedback, and self-calibration methods in order to perform simple two-finger piano pieces. As with human performers, control adjustments can be made in real-time when musical errors are detected during the course of a performance. This system is intended as a proof-of-concept and prototyping platform for humanoid music performance methods, which are ultimately destined for the Hubo adult-sized humanoid. The current system allows us to develop algorithms in a low-risk environment before moving to the much more capable, but costly, Hubo platform. In developing and analyzing humanoid performance gestures, this effort represents a step towards a better understanding of human musicianship, musical expression, and creativity, which may ultimately facilitate improved human-robot interaction.","PeriodicalId":243505,"journal":{"name":"2010 10th IEEE-RAS International Conference on Humanoid Robots","volume":"102 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114383424","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Realization of miniature humanoid for obstacle avoidance with real-time ZMP preview control used for full-sized humanoid
Youngbum Jun, Robert Ellenberg, P. Oh
Pub Date: 2010-12-01 | DOI: 10.1109/ICHR.2010.5686276
Many walking-pattern generators for humanoid robots require predefined trajectories for the robot to track. This inflexibility limits the range of real-world environments the robot can navigate. For environments with obstacles and inconsistent terrain, the ability to change the walking trajectory becomes valuable. Using a miniature humanoid, a three-dimensional inverted pendulum model and ZMP preview control, combined with a ZMP and foot-trajectory generator, were used to implement a real-time ZMP preview controller. We show simulation results of walking on an obstacle field, which validate that this approach can generate all types of walking patterns, using target step distances estimated from sensor data, without predefined trajectories.
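As a hedged sketch of the underlying technique, the code below implements a textbook cart-table/LIPM ZMP preview controller in the style of Kajita et al. for one horizontal axis. All parameter values (pendulum height, timestep, preview horizon, weights) are assumptions for the example, not the paper's settings.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

def preview_controller(zmp_ref, z_c=0.26, dt=0.01, N=100, g=9.81, Qe=1.0, R=1e-6):
    """ZMP preview control, sketched: state x = [CoM pos, vel, acc],
    input u = CoM jerk, output = ZMP under the cart-table model."""
    A = np.array([[1.0, dt, dt**2 / 2], [0.0, 1.0, dt], [0.0, 0.0, 1.0]])
    B = np.array([[dt**3 / 6], [dt**2 / 2], [dt]])
    C = np.array([[1.0, 0.0, -z_c / g]])
    # Augment the state with the ZMP tracking-error integrator.
    A1 = np.block([[np.eye(1), C @ A], [np.zeros((3, 1)), A]])
    B1 = np.vstack([C @ B, B])
    Q = np.diag([Qe, 0.0, 0.0, 0.0])
    P = solve_discrete_are(A1, B1, Q, np.array([[R]]))
    denom = R + (B1.T @ P @ B1)[0, 0]
    K = (B1.T @ P @ A1) / denom          # [Gi | Gx]: integral and state gains
    Gi, Gx = K[0, 0], K[0, 1:]
    # Preview (feedforward) gains over the next N reference samples.
    Ac = A1 - B1 @ K
    f, X = np.empty(N), P @ np.array([1.0, 0.0, 0.0, 0.0])
    for i in range(N):
        f[i] = (B1[:, 0] @ X) / denom
        X = Ac.T @ X
    # Closed-loop simulation against the reference.
    x, e_sum, com = np.zeros(3), 0.0, []
    for k in range(len(zmp_ref) - N):
        e_sum += (C @ x)[0] - zmp_ref[k]                       # ZMP tracking error
        u = -Gi * e_sum - Gx @ x + f @ zmp_ref[k + 1:k + 1 + N]  # preview term
        x = A @ x + B[:, 0] * u
        com.append(x[0])
    return np.array(com)

# A step ZMP reference roughly mimicking a single footstep transition.
ref = np.concatenate([np.zeros(200), 0.1 * np.ones(400)])
print(preview_controller(ref)[-1])  # CoM settles near the 0.1 m reference
```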
{"title":"Realization of miniature humanoid for obstacle avoidance with real-time ZMP preview control used for full-sized humanoid","authors":"Youngbum Jun, Robert Ellenberg, P. Oh","doi":"10.1109/ICHR.2010.5686276","DOIUrl":"https://doi.org/10.1109/ICHR.2010.5686276","url":null,"abstract":"Many walking pattern generators for humanoid robots require predefined trajectories for the robot to track. This inflexibility limits the range of real-world environments that the robot can navigate through. For environments with obstacles and inconsistent terrain, the ability to change the walking trajectory becomes valuable. Using a miniature humanoid, a three-dimensional inverted pendulum model and ZMP preview control with ZMP and Foot generator were used to implement a real-time ZMP preview controller. We show the simulation results walking on obstacle field which validates that this approach can generate the all types of walking pattern based on the distance estimated based on the sensor data to the target to step without the predefined trajectory.","PeriodicalId":243505,"journal":{"name":"2010 10th IEEE-RAS International Conference on Humanoid Robots","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129104857","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Perception and probabilistic anchoring for dynamic world state logging
Nico Blodow, Dominik Jain, Zoltán-Csaba Márton, M. Beetz
Pub Date: 2010-12-01 | DOI: 10.1109/ICHR.2010.5686341
Knowing precisely where objects are located enables a robot to perform its tasks both more efficiently and more reliably. To acquire the respective knowledge and to effectively use it as a resource, a robot has to go through the world with “open eyes”. Specifically, it has to become environment-aware by keeping track of where objects of interest are located and explicitly representing their geometrical properties. In this paper, we propose to equip robots with a perception system that passively monitors the environment using a 3D data-acquisition system, identifying objects that might become the subject of future manipulation tasks. Our system encompasses a 3D semantic mapping and reconstruction pipeline and a storage and data-merging unit for perceived information that provides on-demand modeling and comparison capabilities. Based on probabilistic logical models, we address the important perceptual subtask of object identity resolution, i.e., inferring which observations refer to which entities in the real world (perceptual anchoring). Our system can be used to bootstrap the generation of object-centric knowledge and can, in this way, serve as a mid-level perception system that enables activity recognition, scene recognition, and high-level planning.
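The anchoring decision itself, assigning each observation to an existing entity or spawning a new one, can be illustrated with a toy position-likelihood matcher. This is a deliberately simplified stand-in; the paper uses much richer probabilistic logical models over multiple cues.

```python
import numpy as np

class Anchorer:
    """Toy identity resolution: each observation is matched to the existing
    entity with the highest Gaussian position likelihood, or spawns a new
    entity when no likelihood exceeds a threshold."""

    def __init__(self, sigma=0.05, new_entity_threshold=1e-3):
        self.entities = []              # last known position per entity
        self.sigma = sigma
        self.threshold = new_entity_threshold

    def observe(self, position):
        position = np.asarray(position)
        if self.entities:
            likelihoods = [
                np.exp(-np.sum((position - e) ** 2) / (2 * self.sigma ** 2))
                for e in self.entities
            ]
            best = int(np.argmax(likelihoods))
            if likelihoods[best] > self.threshold:
                self.entities[best] = position   # re-anchor: same object, moved slightly
                return best
        self.entities.append(position)           # unexplained observation: new object
        return len(self.entities) - 1

a = Anchorer()
print(a.observe([0.5, 0.2]), a.observe([0.51, 0.2]), a.observe([1.0, 1.0]))  # 0 0 1
```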
{"title":"Perception and probabilistic anchoring for dynamic world state logging","authors":"Nico Blodow, Dominik Jain, Zoltán-Csaba Márton, M. Beetz","doi":"10.1109/ICHR.2010.5686341","DOIUrl":"https://doi.org/10.1109/ICHR.2010.5686341","url":null,"abstract":"Knowing precisely where objects are located enables a robot to perform its tasks both more efficiently and more reliably. To acquire the respective knowledge and to effectively use it as a resource, a robot has to go through the world with “open eyes”. Specifically, it has to become environment-aware by keeping track of where objects of interest are located and explicitly represent their geometrical properties. In this paper, we propose to equip robots with a perception system that passively monitors the environment using a 3D data acquisition system, identifying objects that might become the subject of future manipulation tasks. Our system encompasses a 3D semantic mapping and reconstruction pipeline and a storage and data merging unit for perceived information that provides on-demand modeling and comparison capabilities. Based on probabilistic logical models, we address the important perceptual subtask of object identity resolution, i.e. inferring which observations refer to which entities in the real world (perceptual anchoring). Our system can be used as a bootstrapping system for the generation of object-centric knowledge and can, in this way, be used as a mid-level perception system that enables activity recognition, scene recognition and high-level planning.","PeriodicalId":243505,"journal":{"name":"2010 10th IEEE-RAS International Conference on Humanoid Robots","volume":"PP 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126535783","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fast and Automatic Detection and Segmentation of unknown objects
G. Kootstra, N. Bergström, D. Kragic
Pub Date: 2010-12-01 | DOI: 10.1109/ICHR.2010.5686837
This paper focuses on the fast and automatic detection and segmentation of unknown objects in unknown environments. Many existing object detection and segmentation methods assume prior knowledge about the object or rely on human intervention. However, an autonomous system operating in the real world will often be confronted with previously unseen objects. To solve this problem, we propose a segmentation approach named Automatic Detection And Segmentation (ADAS). For the detection of objects, we use symmetry, one of the Gestalt principles for figure-ground segregation, to detect salient objects in a scene. From the initial seed, the object is segmented by iteratively applying graph cuts. We base the segmentation on both 2D and 3D cues: color, depth, and plane information. Instead of using a standard grid-based representation of the image, we use superpixels. Besides being a more natural representation, superpixels greatly reduce the processing time of the graph cuts and provide more noise-robust color and depth information. The results show that both the object-detection and the object-segmentation methods are successful and outperform existing methods.
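To give the flavor of symmetry-based detection, the sketch below scores local bilateral symmetry by correlating each pixel's left-hand patch with the mirrored right-hand patch; the peak marks a candidate figure center that could seed segmentation. The actual detector and the graph-cut stage in the paper are considerably more sophisticated.

```python
import numpy as np

def symmetry_map(img, radius=8):
    """Score local bilateral symmetry: correlate the patch left of each pixel
    with the mirrored patch to its right. Bright, symmetric structure scores
    high; empty background scores zero."""
    h, w = img.shape
    score = np.zeros((h, w))
    for x in range(radius, w - radius):
        left = img[:, x - radius:x]
        right = img[:, x + 1:x + radius + 1][:, ::-1]   # mirrored right patch
        score[:, x] = np.mean(left * right, axis=1)
    return score

# Synthetic scene: a bright square (bilaterally symmetric) on a dark background.
img = np.zeros((64, 64))
img[20:40, 24:40] = 1.0
peak = np.unravel_index(np.argmax(symmetry_map(img)), img.shape)
print(peak)  # lands inside the square, near its vertical symmetry axis
```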
{"title":"Fast and Automatic Detection and Segmentation of unknown objects","authors":"G. Kootstra, N. Bergström, D. Kragic","doi":"10.1109/ICHR.2010.5686837","DOIUrl":"https://doi.org/10.1109/ICHR.2010.5686837","url":null,"abstract":"This paper focuses on the fast and automatic detection and segmentation of unknown objects in unknown environments. Many existing object detection and segmentation methods assume prior knowledge about the object or human interference. However, an autonomous system operating in the real world will often be confronted with previously unseen objects. To solve this problem, we propose a segmentation approach named Automatic Detection And Segmentation (ADAS). For the detection of objects, we use symmetry, one of the Gestalt principles for figure-ground segregation to detect salient objects in a scene. From the initial seed, the object is segmented by iteratively applying graph cuts. We base the segmentation on both 2D and 3D cues: color, depth, and plane information. Instead of using a standard grid-based representation of the image, we use super pixels. Besides being a more natural representation, the use of super pixels greatly improves the processing time of the graph cuts, and provides more noise-robust color and depth information. The results show that both the object-detection as well as the object-segmentation method are successful and outperform existing methods.","PeriodicalId":243505,"journal":{"name":"2010 10th IEEE-RAS International Conference on Humanoid Robots","volume":"12 1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125758492","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Walking in unknown environments — A step towards more autonomy
T. Buschmann, S. Lohmeier, Markus Schwienbacher, V. Favot, H. Ulbrich, F. V. Hundelshausen, Gerhard Rohe, Hans-Joachim Wünsche
Pub Date: 2010-12-01 | DOI: 10.1109/ICHR.2010.5686338
We present a new approach to autonomous navigation for humanoid robots in previously unknown environments. Our method uses only on-board cameras and does not rely on color coding, textured surfaces or predefined object models. We propose using a reactive approach towards navigation that does not require building an explicit model of the environment. The basic idea of the navigation system is to check a set of precalculated feasible trajectories (called tentacles) for passability instead of searching for a path within a map. The computer vision system was combined with our real-time trajectory planning and control, enabling our robot LOLA to safely navigate in previously unknown environments.
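The core loop, testing a fixed library of precomputed arcs against perceived obstacles instead of planning in a map, can be sketched as follows. The grid representation, clearance radius, and tentacle shapes are assumptions for the example; the paper checks swept volumes of walking trajectories from on-board vision.

```python
import numpy as np

def best_tentacle(occupancy, pose, tentacles, clearance=2):
    """Pick the first passable precomputed trajectory ("tentacle").
    `occupancy` is a 2-D bool grid (True = obstacle), `pose` the robot's cell,
    and each tentacle a list of (dx, dy) cell offsets along its arc."""
    for idx, tentacle in enumerate(tentacles):
        passable = True
        for dx, dy in tentacle:
            x, y = pose[0] + dx, pose[1] + dy
            region = occupancy[max(0, x - clearance):x + clearance + 1,
                               max(0, y - clearance):y + clearance + 1]
            if region.any():            # an obstacle intrudes on the swept corridor
                passable = False
                break
        if passable:
            return idx                  # tentacles assumed ordered by preference
    return None                         # no feasible trajectory: stop and replan

grid = np.zeros((40, 40), dtype=bool)
grid[18:22, 17:26] = True                         # a wall ahead of the robot
straight = [(i, 0) for i in range(1, 15)]         # blocked by the wall
diagonal = [(i, -i) for i in range(1, 15)]        # veers around it
print(best_tentacle(grid, (10, 20), [straight, diagonal]))  # -> 1
```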
{"title":"Walking in unknown environments — A step towards more autonomy","authors":"T. Buschmann, S. Lohmeier, Markus Schwienbacher, V. Favot, H. Ulbrich, F. V. Hundelshausen, Gerhard Rohe, Hans-Joachim Wünsche","doi":"10.1109/ICHR.2010.5686338","DOIUrl":"https://doi.org/10.1109/ICHR.2010.5686338","url":null,"abstract":"We present a new approach to autonomous navigation for humanoid robots in previously unknown environments. Our method uses only on-board cameras and does not rely on color coding, textured surfaces or predefined object models. We propose using a reactive approach towards navigation that does not require building an explicit model of the environment. The basic idea of the navigation system is to check a set of precalculated feasible trajectories (called tentacles) for passability instead of searching for a path within a map. The computer vision system was combined with our real-time trajectory planning and control, enabling our robot LOLA to safely navigate in previously unknown environments.","PeriodicalId":243505,"journal":{"name":"2010 10th IEEE-RAS International Conference on Humanoid Robots","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133525128","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A common interface for humanoid simulation and hardware
Robert Ellenberg, Robert M. Sherbert, P. Oh, A. Alspach, R. Gross, Jun-Ho Oh
Pub Date: 2010-12-01 | DOI: 10.1109/ICHR.2010.5686325
Humanoid robotics development often depends on simulation and prototypes for new walking algorithms. The advantages of simulation, such as low cost and risk, make repeated experiments and development more straightforward. However, real-world implementation can require extensive porting to move from a simulation environment to a hardware platform. This tedious task can be a huge time sink that drastically slows development cycles. This paper describes a way to eliminate this bottleneck: by developing standardized protocols for motors and sensors, a software controller can communicate with both platforms through the same interface.
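A minimal sketch of such a shared interface, with hypothetical names, might look like this in Python (the paper's actual protocols target motor and sensor buses):

```python
from abc import ABC, abstractmethod

class MotorInterface(ABC):
    """One controller-facing protocol for both backends."""

    @abstractmethod
    def set_position(self, joint: str, radians: float) -> None: ...

    @abstractmethod
    def get_position(self, joint: str) -> float: ...

class SimulatedMotors(MotorInterface):
    """Backend that just books positions; a real one would call the simulator."""
    def __init__(self):
        self._q = {}
    def set_position(self, joint, radians):
        self._q[joint] = radians
    def get_position(self, joint):
        return self._q.get(joint, 0.0)

class HardwareMotors(MotorInterface):
    """Backend that would serialize commands onto the servo bus."""
    def __init__(self, bus):
        self.bus = bus                  # placeholder for a serial-port wrapper
    def set_position(self, joint, radians):
        self.bus.send(joint, radians)
    def get_position(self, joint):
        return self.bus.read(joint)

def controller_step(motors: MotorInterface):
    """The same control code runs unchanged against either backend."""
    motors.set_position("left_knee", 0.3)
    return motors.get_position("left_knee")

print(controller_step(SimulatedMotors()))   # 0.3
```

Swapping `SimulatedMotors` for `HardwareMotors` changes nothing in the controller, which is exactly the porting bottleneck the paper aims to remove.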
{"title":"A common interface for humanoid simulation and hardware","authors":"Robert Ellenberg, Robert M. Sherbert, P. Oh, A. Alspach, R. Gross, Jun-Ho Oh","doi":"10.1109/ICHR.2010.5686325","DOIUrl":"https://doi.org/10.1109/ICHR.2010.5686325","url":null,"abstract":"Humanoid robotics development often depends on simulation and prototypes for new walking algorithms. The advantages of simulation such as low cost and risk make repeated experiments and development more straightforward. However, real world implementation can require extensive porting to move from a simulation environment to a hardware platform. This tedious task can be a huge time sink that drastically slows development cycles. This paper describes a way to eliminate this bottleneck. By developing standardized protocols for motors and sensors, a software controller can communicate with both platforms using the same interface.","PeriodicalId":243505,"journal":{"name":"2010 10th IEEE-RAS International Conference on Humanoid Robots","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115342018","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}