Using spatial context knowledge in gesture recognition for commanding a domestic service robot
M. Strobel, Jörg Illmann, B. Kluge, F. Marrone
Pub Date : 2002-12-10 | DOI: 10.1109/ROMAN.2002.1045666 | Proceedings. 11th IEEE International Workshop on Robot and Human Interactive Communication
The use of human gestures for interacting with robot systems in domestic environments is investigated, with special attention to recognizing the user's intent behind a gestural action. The main advantage of our approach is that the human's movement, together with information extracted from a spatial scene representation, is considered directly when uncovering the intention behind a gesture. To uncover the intention of dynamic human gestures, we use continuous-density hidden Markov models. As an application example, instructing a domestic service robot is considered. An event-driven control architecture permits easy context switching and meets the demands of an interactive robot assistant.
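The gesture-classification core described here can be illustrated in a few lines. The sketch below is not the authors' implementation: it is a minimal scaled forward algorithm for a continuous-density (Gaussian-emission) HMM that scores an observation sequence, e.g. hand velocities, under competing gesture models; the model names, topology, and parameters are invented toy values.

```python
import numpy as np

def gaussian_pdf(x, mean, var):
    """Diagonal-covariance Gaussian density (the per-state emission model)."""
    d = x - mean
    return np.exp(-0.5 * np.sum(d * d / var)) / np.sqrt(np.prod(2.0 * np.pi * var))

def log_likelihood(obs, pi, A, means, variances):
    """Scaled forward algorithm for a continuous-density HMM.

    obs: (T, D) feature sequence; pi: (N,) initial state probabilities;
    A: (N, N) transition matrix; means/variances: (N, D) per-state
    Gaussian parameters. Returns log P(obs | model).
    """
    N = len(pi)
    b = np.array([gaussian_pdf(obs[0], means[i], variances[i]) for i in range(N)])
    alpha = pi * b
    c = alpha.sum()
    alpha /= c                        # scale to avoid underflow
    loglik = np.log(c)
    for t in range(1, len(obs)):
        b = np.array([gaussian_pdf(obs[t], means[i], variances[i]) for i in range(N)])
        alpha = (alpha @ A) * b       # forward recursion
        c = alpha.sum()
        alpha /= c
        loglik += np.log(c)
    return loglik

# Toy classification: score one trajectory against two hypothetical gesture
# models and pick the more likely one (all numbers are illustrative).
pi = np.array([1.0, 0.0])             # left-to-right topology
A = np.array([[0.7, 0.3], [0.0, 1.0]])
model_point = (np.array([[0.0, 0.0], [1.0, 1.0]]), np.full((2, 2), 0.2))
model_wave = (np.array([[5.0, 5.0], [6.0, 6.0]]), np.full((2, 2), 0.2))
obs = np.array([[0.1, 0.0], [0.2, 0.1], [0.9, 1.0], [1.1, 0.9]])
ll_point = log_likelihood(obs, pi, A, *model_point)
ll_wave = log_likelihood(obs, pi, A, *model_wave)
```

In practice one such model would be trained per gesture class, and the intent-recognition step described in the abstract would additionally condition the decision on the spatial scene context.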
A habituation mechanism for a perceptual user interface
J. Lorenzo, M. Hernandez
Pub Date : 2002-12-10 | DOI: 10.1109/ROMAN.2002.1045608
This paper extends Stanley's (1976) habituation model with a preceding stage, based on the spectrogram, that detects temporal patterns in a signal and yields a measure of habituation to these patterns. With this addition we obtain a habituation scheme that saturates as a temporal pattern is perceived by the system and drops when the pattern changes. Detection of these temporal patterns is simplified by the spectrogram, which allows simple detection techniques such as a linear predictive filter. We have also carried out experiments with sequences of colour images in an office environment to test the performance of the proposed method on real images.
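The pipeline described, a spectrogram stage feeding a linear predictive filter whose residual drives a Stanley-style habituation variable, can be sketched as follows. This is a plausible reconstruction, not the authors' code: the filter order, gains, and the normalized-residual novelty measure are assumptions.

```python
import numpy as np

def spectrogram(signal, win=64, hop=32):
    """Magnitude spectrogram via a sliding windowed FFT (the first stage)."""
    frames = [signal[i:i + win] * np.hanning(win)
              for i in range(0, len(signal) - win + 1, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1))

def habituation_trace(spec, order=3, tau=0.5, alpha=1.0, y0=1.0):
    """Habituation in Stanley's first-order form, driven by the residual of
    a linear predictive filter over spectrogram frames.

    Each frame is predicted as a least-squares linear combination of the
    previous `order` frames; a small residual (familiar pattern) lets the
    habituation variable y saturate toward y0, a large one makes it drop.
    """
    y, trace = y0, []
    for t in range(order, len(spec)):
        past = spec[t - order:t].T                     # (bins, order)
        coef, *_ = np.linalg.lstsq(past, spec[t], rcond=None)
        err = np.linalg.norm(past @ coef - spec[t]) / (np.linalg.norm(spec[t]) + 1e-12)
        y += tau * (alpha * (y0 - y) - err)            # dy/dt = a(y0 - y) - S
        y = min(max(y, 0.0), y0)
        trace.append(y)
    return np.array(trace)

# A sinusoid that changes frequency halfway: habituation saturates while the
# pattern repeats, drops at the change, then recovers as the new pattern
# becomes familiar.
n = np.arange(4096)
signal = np.where(n < 2048,
                  np.sin(2 * np.pi * 0.05 * n),
                  np.sin(2 * np.pi * 0.15 * n))
trace = habituation_trace(spectrogram(signal))
```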
Cooperative motion planning for dual arm robot to demonstrate human arm movements
T. Osone, J. Tatsuno, T. Nishida, H. Kobayashi
Pub Date : 2002-12-10 | DOI: 10.1109/ROMAN.2002.1045669
Human-friendly robots should behave like human beings. This paper proposes a cost function that generates human-like movements when robots execute dual-arm cooperative tasks. Based on observations of human behavior, we established a cost function with two factors: the second derivatives of the joint torques and the degree of visibility. In several simulations, the proposed cost function generated human-like movements that were closer to real human behavior than those produced by conventional cost functions.
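A trajectory cost built from these two factors can be evaluated as below. This is a hedged sketch, not the paper's exact formulation: the finite-difference discretization, the quadratic out-of-view penalty, and the relative weighting are assumptions.

```python
import numpy as np

def movement_cost(torques, hand_angles, fov=np.deg2rad(40.0), w_vis=1.0, dt=0.01):
    """Cost of a candidate trajectory from two factors (illustrative sketch).

    torques: (T, J) joint torques along the trajectory; the smoothness term
        integrates the squared second time-derivative of the torques.
    hand_angles: (T,) angle between each hand position and the gaze
        direction; hands outside the field of view are penalized
        (a crude stand-in for "degree of visibility").
    """
    d2_tau = np.diff(torques, n=2, axis=0) / dt**2    # second time-derivative
    smoothness = np.sum(d2_tau ** 2) * dt
    out_of_view = np.maximum(np.abs(hand_angles) - fov, 0.0)
    return smoothness + w_vis * np.sum(out_of_view ** 2) * dt

# A smooth torque profile beats a jerky one, and keeping the hands visible
# beats hiding them, as the cost function intends.
t = np.linspace(0.0, 1.0, 101)
smooth = np.column_stack([np.sin(np.pi * t), np.cos(np.pi * t)])
jerky = smooth + 0.05 * np.random.default_rng(1).standard_normal(smooth.shape)
in_view = np.zeros(101)
out_view = np.full(101, np.deg2rad(60.0))
c_smooth = movement_cost(smooth, in_view)
c_jerky = movement_cost(jerky, in_view)
c_hidden = movement_cost(smooth, out_view)
```

A motion planner would then search over candidate trajectories for the one minimizing this cost.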
The MORPHA style guide for icon-based programming
R. Bischoff, Arif Kazi, Markus Seyfarth
Pub Date : 2002-12-10 | DOI: 10.1109/ROMAN.2002.1045668
Icon-based programming paves the way for making the programming of modern industrial robots simpler and more intuitive. The flowchart-like representation of the program structure provides a superior overview, and programming becomes possible without detailed prior knowledge of a syntax. The style guide presented in this paper suggests a number of manufacturer-independent design rules for intuitive icon-based programming interfaces operated via touch screen and speech input. The complete style guide can be obtained from the authors free of charge by e-mail.
Development of new ensemble system based on coupled phase oscillator
Y. Kobayashi, Y. Nagata, Y. Miyake
Pub Date : 2002-12-10 | DOI: 10.1109/ROMAN.2002.1045601
Mutual interaction of rhythm is widely observed in music performance between humans. In this study, building on our previous research, we use a mutual entrainment model of coupled phase oscillators to construct an ensemble support system that realizes such a mutual interaction process. Comparison with a direct ensemble between two humans suggests the effectiveness of the new support system.
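Mutual entrainment of two coupled phase oscillators, the mechanism at the core of such a model, can be demonstrated in a few lines. The Kuramoto-style coupling below is the standard form; the frequencies and coupling strength are illustrative, not the parameters of the paper's system.

```python
import numpy as np

def entrain(omega1, omega2, k, steps=20000, dt=0.001):
    """Two mutually coupled phase oscillators (Kuramoto form),
        dθ1/dt = ω1 + k sin(θ2 − θ1),   dθ2/dt = ω2 + k sin(θ1 − θ2),
    integrated with forward Euler. Returns the phase-difference history."""
    th1, th2 = 0.0, np.pi / 2.0
    diff = np.empty(steps)
    for i in range(steps):
        d1 = omega1 + k * np.sin(th2 - th1)
        d2 = omega2 + k * np.sin(th1 - th2)
        th1 += d1 * dt
        th2 += d2 * dt
        diff[i] = th1 - th2
    return diff

# Detuned oscillators (1.0 Hz vs 1.1 Hz): with coupling k > |Δω| / 2 the
# phase difference locks to a constant; with no coupling it drifts forever.
locked = entrain(2 * np.pi * 1.0, 2 * np.pi * 1.1, k=1.0)
unlocked = entrain(2 * np.pi * 1.0, 2 * np.pi * 1.1, k=0.0)
```

In an ensemble support system, one oscillator would track each player's performed rhythm and the entrained phase would drive the timing cues fed back to the partner.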
Artifact intelligence: yet another approach for intelligent robots
H. Takeda, K. Terada, T. Kawamura
Pub Date : 2002-12-10 | DOI: 10.1109/ROMAN.2002.1045618
In this paper, we propose a new concept of intelligence, called artifact intelligence, as another approach to realizing intelligent robots. Artifact intelligence means intelligence for artifacts that fits their embodiment, i.e., the structures and functions of the artifacts. It differs from natural intelligence in terms of intentionality and of automated objects in autonomy. To realize artifact intelligence, we investigate two types of affordance with prototype robots: active affordance with an autonomous mobile chair and emergent affordance with "AgentBox".
Analysis of the co-emergence process on the human-human cooperation
Takeshi Muto, Yoshihiro Miyake
Pub Date : 2002-12-10 | DOI: 10.1109/ROMAN.2002.1045599
In this study, we analyze the co-emergence process in human-human cooperation. In particular, we focus on cooperative walking realized through the synchronization of two persons' footsteps and analyze its dynamics. The results clarified that there is a mutual constraint between the stepping motion of the legs and the swinging motion of the arms, and that the arm and leg cycles synchronize with each other through entrainment of the footstep rhythm. In addition, it became clear that the arm dynamics are influenced by attention while the leg dynamics are not. This suggests that, in cooperative walking, the arms have dynamics different from those of the legs, and that the mutual constraint between these dynamics realizes the co-emergence process.
Development of 3D shape evaluation system for collaboration support in virtual space
M. Okubo, T. Watanabe
Pub Date : 2002-12-10 | DOI: 10.1109/ROMAN.2002.1045597
Collaboration consists mainly of two tasks: each partner's individual task and communication with the other. Both are essential for smooth collaboration, so a collaboration support system should combine operation support and communication support. We have developed a 3D shape evaluation support system in virtual space and an embodied virtual communication system for analysis by synthesis; fusing the two systems is necessary for a collaboration support system for 3D shape evaluation in virtual space. From the viewpoint of communication, talkers tend to prefer a situation in which they can share embodied interaction by observing the interaction of the avatars, including their own, in the same virtual space. However, motion images that include themselves can discourage them from the shape evaluation. In this paper, the influence of the point of view on shape evaluation in virtual space is investigated with the proposed system, using sensory evaluation by paired comparison and questionnaire analysis. We find that subjects prefer a point of view from which they can see only the motion of their forearms for shape evaluation in virtual space.
Receiver robot's motion for handing-over with a human
S. Kajikawa, N. Saito, H. Okano
Pub Date : 2002-12-10 | DOI: 10.1109/ROMAN.2002.1045670
Robots are expected to assist with activities of daily life. In this paper, we design an algorithm that realizes a handing-over motion as a cooperative task between a human and a robot: we plan human-like robot motion for receiving an object handed over by a human. First, we analyze the handing-over motion performed by two humans. From the experimental results, we extract characteristics of the hand trajectory, velocity profile, and arm posture, and confirm that human-like motion can be reproduced from these characteristics. Finally, we plan the robot motion with an instantaneous optimal control method that evaluates, at each sampling step, the error (in the relative position, velocity, and direction of the hand tip) and the energy cost (joint torque). In this method we adopt a time-varying weight matrix that governs how the error is reduced, i.e., how the robot approaches the human, in order to give the robot's motion human-like characteristics. Simulation results show the validity of the proposed method.
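The instantaneous optimal control idea, minimizing a weighted error-plus-effort cost at every sampling step with a time-varying weight, can be sketched on a drastically simplified model. Here the command is a hand velocity and only the position error is weighted; the paper's full cost also covers velocity, direction, and joint torque, and the weight schedule below is an invented example.

```python
import numpy as np

def receive_step(x, target, w_err, w_effort=1e-4, dt=0.01):
    """One sampling step of an instantaneous optimal control law (sketch).

    Chooses the hand velocity u minimizing
        w_err * ||x + u*dt - target||^2  +  w_effort * ||u||^2,
    whose zero-gradient condition gives the closed form below.
    """
    gain = w_err * dt / (w_err * dt * dt + w_effort)
    u = gain * (target - x)
    return x + u * dt

# Hand approaches the (hypothetical) exchange point; the error weight grows
# along the motion so accuracy matters more as the hand-over nears.
x = np.array([0.0, 0.5])
target = np.array([0.4, 0.3])
path = [x]
for t in range(300):
    w = 0.5 + 5.0 * t / 300          # time-varying weight schedule
    x = receive_step(x, target, w_err=w)
    path.append(x)
```

A low weight early and a high weight late yields the slow-start, precise-finish profile one expects of a natural receiving motion.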
Non-holonomic navigation system of a walking-aid robot
José Manuel Hostalet Wandosell, Birgit Graf
Pub Date : 2002-12-10 | DOI: 10.1109/ROMAN.2002.1045674
This work presents a navigation method for a nonholonomic robot used as a walking assistant that guides a person to a specific target. During motion, the user may modify the path computed by the robot. The method is based on a new "elastic band" model that accounts for the requirements of a car-like robot moving only forward (the Dubins model).
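The classic elastic-band relaxation that the method builds on can be sketched as below. This is the standard formulation with illustrative gains; it omits the paper's contribution, the adaptation to a forward-only, car-like (Dubins) robot.

```python
import numpy as np

def relax_band(path, obstacles, k_int=0.3, k_ext=0.2, influence=0.6, iters=400):
    """Elastic-band path relaxation (classic form, illustrative gains).

    Interior via-points feel an internal tension pulling them toward the
    midpoint of their neighbors and a repulsive force pushing them away
    from obstacles within the influence radius; the endpoints stay fixed.
    """
    path = np.array(path, dtype=float)
    obstacles = np.array(obstacles, dtype=float)
    for _ in range(iters):
        for i in range(1, len(path) - 1):
            f_int = 0.5 * (path[i - 1] + path[i + 1]) - path[i]
            f_ext = np.zeros(2)
            for ob in obstacles:
                d = path[i] - ob
                dist = np.linalg.norm(d)
                if dist < influence:
                    f_ext += (influence - dist) * d / (dist + 1e-9)
            path[i] += k_int * f_int + k_ext * f_ext
    return path

# A straight band through an obstacle bends around it while its endpoints
# (start and target) stay put; the user's pushes would enter the same way,
# as extra external forces on the via-points.
start_goal = np.linspace([0.0, 0.0], [2.0, 0.0], 11)
band = relax_band(start_goal, obstacles=[[1.0, 0.05]])
```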