Bi-directional human machine interface via direct neural connection
Pub Date: 2002-12-10; DOI: 10.1109/ROMAN.2002.1045633
M. Gasson, B. Hutt, I. Goodhew, P. Kyberd, K. Warwick
This paper presents an application study into the use of a bi-directional link with the human nervous system by means of an implant, positioned through neurosurgery. Various applications are described, including the interaction of neural signals with an articulated hand and with a group of cooperative autonomous robots, and their use to control the movement of a mobile platform. The microelectrode array implant itself is described in detail. Consideration is given to a wider range of possible robot mechanisms that could interact with the human nervous system through the same technique.
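The abstract does not specify how recorded neural activity is translated into commands for the hand or the mobile platform. Purely as an illustration of such a mapping layer, the Python sketch below assumes pre-processed per-channel RMS activity levels and hypothetical rest/maximum calibration values; none of the names, thresholds, or channel roles come from the paper.

```python
import numpy as np

def map_neural_activity(channel_rms, rest_level, max_level):
    """Normalize per-channel RMS activity to [0, 1] (illustrative only)."""
    span = max(max_level - rest_level, 1e-9)
    return float(np.clip((channel_rms - rest_level) / span, 0.0, 1.0))

def to_device_commands(flexor_rms, extensor_rms):
    """Map two hypothetical channels to a gripper aperture and platform speed.

    Assumptions (not from the paper): flexor-related activity closes the
    articulated hand, extensor-related activity drives the mobile platform.
    """
    grip = map_neural_activity(flexor_rms, rest_level=5.0, max_level=50.0)
    speed = map_neural_activity(extensor_rms, rest_level=5.0, max_level=50.0)
    return {"gripper_aperture": 1.0 - grip,        # 1.0 = fully open
            "platform_velocity_mps": 0.5 * speed}  # capped at 0.5 m/s

if __name__ == "__main__":
    print(to_device_commands(flexor_rms=32.0, extensor_rms=12.0))
```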
{"title":"Bi-directional human machine interface via direct neural connection","authors":"M. Gasson, B. Hutt, I. Goodhew, P. Kyberd, K. Warwick","doi":"10.1109/ROMAN.2002.1045633","DOIUrl":"https://doi.org/10.1109/ROMAN.2002.1045633","url":null,"abstract":"This paper presents an application study into the use of a bi-directional link with the human nervous system by means of an implant, positioned through neurosurgery. Various applications are described including the interaction of neural signals with an articulated hand, a group of cooperative autonomous robots and to control the movement of a mobile platform. The microelectrode array implant itself is described in detail. Consideration is given to a wider range of possible robot mechanisms, which could interact with the human nervous system through the same technique.","PeriodicalId":222409,"journal":{"name":"Proceedings. 11th IEEE International Workshop on Robot and Human Interactive Communication","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114801383","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Absolute stable haptic interaction with the isotropic force display
Pub Date: 2002-12-10; DOI: 10.1109/ROMAN.2002.1045610
A. Frisoli, M. Bergamasco
This paper presents different conditions for unconditional stability of the interaction of human operators with haptic interface systems. Criteria for unconditional stability have been theoretically derived and experimentally assessed on the isotropic force display. A good match has been observed between theoretical predictions and real performance.
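The paper's own stability criteria are not reproduced in this listing. As a related point of reference, the sketch below checks Llewellyn's classical absolute-stability conditions on sampled two-port immittance parameters; the example two-port (a virtual spring-damper coupling with added device damping) and all numeric values are assumptions, not the isotropic force display model.

```python
import numpy as np

def llewellyn_stable(p11, p12, p21, p22):
    """Check Llewellyn's absolute-stability conditions on sampled
    two-port immittance parameters (complex arrays over frequency)."""
    c1 = np.real(p11) >= 0
    c2 = np.real(p22) >= 0
    prod = p12 * p21
    c3 = 2 * np.real(p11) * np.real(p22) - np.real(prod) - np.abs(prod) >= 0
    return bool(np.all(c1 & c2 & c3))

if __name__ == "__main__":
    # Illustrative two-port: a spring-damper virtual coupling plus device
    # inertia and damping (impedance parameters), not the device in the paper.
    w = np.linspace(0.1, 1000.0, 2000)    # rad/s
    s = 1j * w
    b, k = 2.0, 500.0                     # damping [Ns/m], stiffness [N/m]
    z_c = b + k / s                       # coupling impedance
    z11 = 0.1 * s + 0.5 + z_c             # device inertia + damping + coupling
    z22 = z_c
    z12 = z21 = -z_c
    print("absolutely stable:", llewellyn_stable(z11, z12, z21, z22))
```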
{"title":"Absolute stable haptic interaction with the isotropic force display","authors":"A. Frisoli, M. Bergamasco","doi":"10.1109/ROMAN.2002.1045610","DOIUrl":"https://doi.org/10.1109/ROMAN.2002.1045610","url":null,"abstract":"This paper presents different conditions for unconditional stability of the interaction of human operators with haptic interface systems. Criteria for the unconditional stability have been theoretically derived and experimentally assessed on the isotropic force display. A good match has been observed between theoretical predictions and real performance.","PeriodicalId":222409,"journal":{"name":"Proceedings. 11th IEEE International Workshop on Robot and Human Interactive Communication","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117352964","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fractal representation of image feature associated with maneuvering affordance
Pub Date: 2002-12-10; DOI: 10.1109/ROMAN.2002.1045672
K. Kamejima
One of the essential capabilities of 'real world intelligence', whether developed naturally or designed artificially, is to generate feasible operations based on an innate belief about the real world. As the cognitive basis of such intelligence, visual perception organizes randomly distributed image features into environment features: well-structured visibles available as consistent cues for subsequent decisions. This phenomenal supervenience on reality plays a crucial role in implementing cooperative systems intended for field automation, vehicle-roadway networking, community restoration after disaster, and interactive education; for example, to generate consistent decisions, partial knowledge of the environment must be adapted intentionally to the encountered scene before the situation is fully comprehended. Such a self-reference structure, however, yields a serious contradiction in understanding natural perception mechanisms and in implementing artificial vision systems. In this paper, a directional Fourier transform is applied to extract maneuvering affordance from noisy imagery. By identifying the brightness distribution of observed patterns with the invariant measure of an unknown fractal attractor, noise levels are estimated for extracting the affordance pattern. The detectability of affordance patterns has been verified through experimental studies.
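One plausible reading of the 'directional Fourier transform' step is orientation-selective spectral energy obtained by masking the 2-D spectrum with an angular wedge. The sketch below illustrates only that reading on a synthetic image; the fractal-attractor noise-level estimation is not reproduced, and all parameters are assumptions.

```python
import numpy as np

def directional_energy(image, theta_deg, width_deg=15.0):
    """Spectral energy of `image` within an angular wedge around theta_deg.

    A plausible reading of a 'directional Fourier transform': mask the 2-D
    spectrum with a double wedge of half-width `width_deg` and sum the power.
    """
    F = np.fft.fftshift(np.fft.fft2(image))
    h, w = image.shape
    fy, fx = np.meshgrid(np.fft.fftshift(np.fft.fftfreq(h)),
                         np.fft.fftshift(np.fft.fftfreq(w)), indexing="ij")
    angle = np.degrees(np.arctan2(fy, fx)) % 180.0
    diff = np.abs(angle - theta_deg % 180.0)
    diff = np.minimum(diff, 180.0 - diff)
    mask = diff <= width_deg
    return float(np.sum(np.abs(F[mask]) ** 2))

if __name__ == "__main__":
    # Synthetic scene: an oblique stripe pattern plus noise (illustrative).
    y, x = np.mgrid[0:128, 0:128]
    img = np.sin(2 * np.pi * (x + y) / 16.0) + 0.3 * np.random.randn(128, 128)
    for theta in (0, 45, 90, 135):
        print(theta, "deg:", round(directional_energy(img, theta), 1))
```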
{"title":"Fractal representation of image feature associated with maneuvering affordance","authors":"K. Kamejima","doi":"10.1109/ROMAN.2002.1045672","DOIUrl":"https://doi.org/10.1109/ROMAN.2002.1045672","url":null,"abstract":"One of the essential capabilities of 'real world intelligence', whether developed naturally or designed artificially, is to generate feasible operations based on innate belief in real world. As cognitive basis of the real world intelligence, visual perception organizes randomly distributed image features into environment features: well structured visibles available as consistent cues to subsequent decisions. Such phenomenal supervenience to reality plays a crucial role in implementing cooperative systems intended for field automation, vehicle-roadway networking, community restoration from disaster, and interactive education, e.g. in generating consistent decisions, partial knowledge of the environment should be adapted intentionally to encountered scene prior to the comprehension of the situations. Such selfreference structure, however, yields serious contradiction in understanding natural perception mechanisms and/or implementing artificial vision systems. In this paper directional Fourier transform was applied to extract maneuvering affordance in noisy imagery. By identifying the brightness distribution of observed patterns with the invariant measure of unknown fractal attractor, noise levels were estimated for extracting affordance pattern. The detectability of affordance patterns has been verified through experimental studies.","PeriodicalId":222409,"journal":{"name":"Proceedings. 11th IEEE International Workshop on Robot and Human Interactive Communication","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121299702","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Improving the man-machine interface through the analysis of expressiveness in human movement
Pub Date: 2002-12-10; DOI: 10.1109/ROMAN.2002.1045658
A. Camurri, P. Coletta, B. Mazzarino, R. Trocca, G. Volpe
In this paper our recent developments in the research on computational models and algorithms for the real-time analysis of full-body human movement are presented. Our aim is to find methods and techniques to extract cues relevant to KANSEI and emotional content in expressive human gesture in real time. Analysis of expressiveness in human gestures can contribute to new paradigms for the design of improved human-robot interfaces. As a main concrete result of our research work, a software platform named EyesWeb has been developed and is distributed for free (www.eyesweb.org). EyesWeb supports research in multimodal interaction and provides a concrete tool for developing real-time interactive applications. Human movement analysis is provided by means of a library of algorithms for sensor and video processing, feature extraction, gesture segmentation, etc. A visual environment is provided to compose these basic algorithms into more sophisticated analysis techniques.
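EyesWeb-style expressive-gesture analysis is often described in terms of silhouette cues such as quantity of motion and contraction index. The sketch below computes simple versions of these two cues on binary silhouettes as an illustration; it is not EyesWeb code, and the definitions used here are simplified assumptions.

```python
import numpy as np

def quantity_of_motion(prev_silhouette, silhouette):
    """Fraction of pixels that changed between two binary silhouettes,
    a simple proxy for the amount of detected motion."""
    return float(np.mean(prev_silhouette != silhouette))

def contraction_index(silhouette):
    """Silhouette area divided by the area of its bounding box:
    values near 1 indicate a contracted posture, near 0 an expanded one."""
    ys, xs = np.nonzero(silhouette)
    if ys.size == 0:
        return 0.0
    box_area = (ys.max() - ys.min() + 1) * (xs.max() - xs.min() + 1)
    return float(ys.size) / float(box_area)

if __name__ == "__main__":
    # Two toy 64x64 silhouettes (illustrative), shifted by a few pixels.
    a = np.zeros((64, 64), dtype=bool); a[20:50, 25:35] = True
    b = np.zeros((64, 64), dtype=bool); b[20:50, 28:38] = True
    print("QoM:", quantity_of_motion(a, b))
    print("CI :", contraction_index(b))
```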
{"title":"Improving the man-machine interface through the analysis of expressiveness in human movement","authors":"A. Camurri, P. Coletta, B. Mazzarino, R. Trocca, G. Volpe","doi":"10.1109/ROMAN.2002.1045658","DOIUrl":"https://doi.org/10.1109/ROMAN.2002.1045658","url":null,"abstract":"In this paper our recent development in the research of computational models and algorithms for the real-time analysis of full-body human movement are presented. Our aim is to find methods and techniques to extract cues relevant to KANSEI and emotional content in human expressive gesture in real time. Analysis of expressiveness in human gestures can contribute to new paradigms for the design of improved human-robot interfaces. As a main concrete result of our research work, a software platform named EyesWeb has been developed and is distributed for free (www.eyesweb.org). EyesWeb supports research in multimodal interaction, and provides a concrete tool for developing real-time interactive applications. Human movement analysis is provided by means of a library of algorithms for sensors and video processing, features extraction, gesture segmentation, etc. A visual environment is provided to compose such basic algorithms in order to develop more sophisticated analysis techniques.","PeriodicalId":222409,"journal":{"name":"Proceedings. 11th IEEE International Workshop on Robot and Human Interactive Communication","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123901303","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Intuitive teaching and surveillance for production assistants
Pub Date: 2002-12-10; DOI: 10.1109/ROMAN.2002.1045667
S. Estable, I. Ahms, H. Backhaus, O. El Zubi, R. Muenstermann
The increased use of production assistants will allow new factory requirements to be fulfilled, such as the production of small series, the reduction of innovation cycles, and the optimization of factory workload. The possible components of such a production assistant, dedicated to object manipulation tasks, have been investigated by Astrium in the project MORPHA. Two features characterize such an assistant system: intuitive teaching and surveillance. Accordingly, three main components have been specified and implemented: pose estimation skills, intuitive trajectory generation, and surveillance for workspace sharing. These components are described and the results evaluated.
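The abstract does not detail the workspace-sharing surveillance component. As a minimal illustration of such a rule, the sketch below scales the robot's speed from the distance between its tool-centre point and the closest tracked human point; the zone radii and the interface are assumptions, not the implemented system.

```python
import numpy as np

def surveillance_speed_scale(human_points, robot_tcp, stop_dist=0.3, warn_dist=1.0):
    """Return a speed scaling factor in [0, 1] from the distance between the
    robot tool-centre point and the closest observed human point (illustrative)."""
    d = min(np.linalg.norm(np.asarray(p) - np.asarray(robot_tcp)) for p in human_points)
    if d <= stop_dist:
        return 0.0                                        # protective stop
    if d >= warn_dist:
        return 1.0                                        # full speed
    return (d - stop_dist) / (warn_dist - stop_dist)      # linear slowdown

if __name__ == "__main__":
    humans = [(1.2, 0.4, 0.9), (0.6, 0.1, 1.1)]           # hypothetical tracked points [m]
    tcp = (0.5, 0.0, 1.0)
    print("speed scale:", round(surveillance_speed_scale(humans, tcp), 2))
```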
{"title":"Intuitive teaching and surveillance for production assistants","authors":"S. Estable, I. Ahms, H. Backhaus, O. El Zubi, R. Muenstermann","doi":"10.1109/ROMAN.2002.1045667","DOIUrl":"https://doi.org/10.1109/ROMAN.2002.1045667","url":null,"abstract":"The increased use of production assistants will allow new factory requirements to be fulfilled like the production of small series, the reduction of innovation cycles and the optimization of factory workload. The possible components of such a production assistant, dedicated to object manipulation tasks, has been investigated by Astrium in the project MORPHA. Two features seem to describe such an assistant system: intuitive teaching and surveillance. Thus, three main components have been specified and implemented: pose estimation skills, intuitive trajectory generation and surveillance for workspace sharing. These components are described and the results evaluated.","PeriodicalId":222409,"journal":{"name":"Proceedings. 11th IEEE International Workshop on Robot and Human Interactive Communication","volume":"94 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124639572","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A prototype robot speech interface with multimodal feedback
Pub Date: 2002-12-10; DOI: 10.1109/ROMAN.2002.1045630
M. Haage, S. Schotz, P. Nugues
Speech recognition is available on ordinary personal computers and is starting to appear in standard software applications. A known problem with speech interfaces is their integration into current graphical user interfaces. This paper reports on a prototype developed for studying integration of speech into graphical interfaces aimed towards programming of industrial robot arms. The aim of the prototype is to develop a speech system for designing robot trajectories that would fit well with current CAD paradigms.
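The prototype's actual grammar and CAD integration are not reproduced here. The sketch below shows a tiny regular-expression command parser of the kind such a speech front end might feed, with made-up command forms; it is not the authors' interface.

```python
import re

# Hypothetical command forms (not the paper's grammar), e.g.
#   "move left 20 millimeters", "go to point A", "close gripper"
_MOVE = re.compile(r"^(move|go)\s+(left|right|up|down|forward|back)\s+(\d+)\s*(mm|millimeters|cm|centimeters)$")
_GOTO = re.compile(r"^(move|go)\s+to\s+point\s+(\w+)$")
_GRIP = re.compile(r"^(open|close)\s+gripper$")

def parse_command(utterance):
    """Map a recognized utterance to a symbolic robot command (illustrative)."""
    u = utterance.strip().lower()
    if m := _MOVE.match(u):
        scale = 1.0 if m.group(4).startswith("m") else 10.0   # cm -> mm
        return ("jog", m.group(2), float(m.group(3)) * scale)  # distance in mm
    if m := _GOTO.match(u):
        return ("goto", m.group(2).upper())
    if m := _GRIP.match(u):
        return ("gripper", m.group(1))
    return ("unrecognized", utterance)

if __name__ == "__main__":
    for s in ["move left 20 millimeters", "go to point A", "close gripper"]:
        print(s, "->", parse_command(s))
```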
{"title":"A prototype robot speech interface with multimodal feedback","authors":"M. Haage, S. schotz, P. Nugues","doi":"10.1109/ROMAN.2002.1045630","DOIUrl":"https://doi.org/10.1109/ROMAN.2002.1045630","url":null,"abstract":"Speech recognition is available on ordinary personal computers and is starting to appear in standard software applications. A known problem with speech interfaces is their integration into current graphical user interfaces. This paper reports on a prototype developed for studying integration of speech into graphical interfaces aimed towards programming of industrial robot arms. The aim of the prototype is to develop a speech system for designing robot trajectories that would fit well with current CAD paradigms.","PeriodicalId":222409,"journal":{"name":"Proceedings. 11th IEEE International Workshop on Robot and Human Interactive Communication","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127964072","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Natural language instructions for joint spatial reference between naive users and a mobile robot
Pub Date: 2002-12-10; DOI: 10.1109/ROMAN.2002.1045627
R. Moratz, T. Tenbrink
Many tasks in the field of service robotics could benefit from a natural language interface that allows human users to talk to the robot as naturally as possible. However, we still lack information about what is natural to human users, as most experimental robotic systems involving natural language developed so far have not been systematically tested with users unfamiliar with the system. In our simple scenario, human users refer to objects via their location rather than feature descriptions. Our robot uses a computational model of spatial reference to interpret the linguistic instructions. In experiments with naive users we test the adequacy of the model for achieving joint spatial reference. We show how our approach can be extended to more complex spatial tasks in natural human-robot interaction.
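The authors' computational model of spatial reference is not reproduced in this listing. As a minimal illustration of projective reference resolution, the sketch below picks the object whose bearing in the robot's frame best matches a direction term; the angular model and the scene are assumptions.

```python
import math

# Reference directions in the robot's own frame (illustrative):
# x forward, y to the robot's left.
_DIRECTIONS = {"front": 0.0, "left": 90.0, "back": 180.0, "right": -90.0}

def resolve_reference(term, objects):
    """Pick the object whose bearing best matches `term` ('left', 'front', ...).

    `objects` maps names to (x, y) positions in the robot frame (assumed input).
    """
    target = _DIRECTIONS[term]
    def angular_error(pos):
        bearing = math.degrees(math.atan2(pos[1], pos[0]))
        return abs((bearing - target + 180.0) % 360.0 - 180.0)
    return min(objects, key=lambda name: angular_error(objects[name]))

if __name__ == "__main__":
    scene = {"cube": (1.0, 0.8), "ball": (0.9, -0.7), "box": (1.5, 0.1)}
    for term in ("left", "right", "front"):
        print(f"'the {term} object' ->", resolve_reference(term, scene))
```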
{"title":"Natural language instructions for joint spatial reference between naive users and a mobile robot","authors":"R. Moratz, T. Tenbrink","doi":"10.1109/ROMAN.2002.1045627","DOIUrl":"https://doi.org/10.1109/ROMAN.2002.1045627","url":null,"abstract":"Many tasks in the field of service robotics could benefit from a natural language interface that allows human users to talk to the robot as naturally as possible. However, so far we lack information about what would be natural to human users, as most experimental robotic systems involving natural language developed so far have not been systematically tested with human users unfamiliar with the system. In our simple scenario, human users refer to objects via their location rather than feature descriptions. Our robot uses a computational model of spatial reference to interpret the linguistic instructions. In experiments with naive users we test the adequacy of the model for achieving joint spatial reference. We show how our approach can be extended to more complex spatial tasks in natural human-robot interaction.","PeriodicalId":222409,"journal":{"name":"Proceedings. 11th IEEE International Workshop on Robot and Human Interactive Communication","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129093010","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Learning object-specific vision-based manipulation in virtual environments
Pub Date: 2002-12-10; DOI: 10.1109/ROMAN.2002.1045623
A. Matsikis, T. Zoumpoulidis, F.H. Broicher, K. Kraiss
In this paper a method for learning object-specific vision-based manipulation is described. The proposed approach uses a virtual environment containing models of the objects and the manipulator with an eye-in-hand camera to simplify and automate the training procedure. An object whose form requires a unique final gripper position and orientation was used to train and test the implemented algorithms. A series of smooth paths leading to the final position is generated based on a typical path defined by an operator. Images and corresponding manipulator positions along the produced paths are gathered in the virtual environment and used for the training of a vision-based controller. The controller uses a structure of radial-basis function (RBF) networks and has to execute a long reaching movement that guides the manipulator to the final position so that afterwards only a minor adjustment of the gripper is needed to complete the grasp.
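The controller structure is described as a set of radial-basis function networks mapping camera views to manipulator motion. The sketch below shows a generic Gaussian RBF network fitted by linear least squares on a toy feature-to-increment mapping; the centres, widths, features, and data are illustrative assumptions, not the trained controller from the paper.

```python
import numpy as np

class RBFNetwork:
    """Gaussian RBF network with linear output weights (illustrative)."""

    def __init__(self, centres, width):
        self.centres = np.asarray(centres, dtype=float)    # (n_centres, n_in)
        self.width = float(width)
        self.weights = None                                # (n_centres + 1, n_out)

    def _features(self, X):
        d2 = ((X[:, None, :] - self.centres[None, :, :]) ** 2).sum(axis=2)
        phi = np.exp(-d2 / (2.0 * self.width ** 2))
        return np.hstack([phi, np.ones((X.shape[0], 1))])  # bias column

    def fit(self, X, Y):
        self.weights, *_ = np.linalg.lstsq(self._features(X), Y, rcond=None)
        return self

    def predict(self, X):
        return self._features(np.atleast_2d(X)) @ self.weights

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy data: 2-D "image feature" -> 3-D motion increment (illustrative).
    X = rng.uniform(-1, 1, size=(200, 2))
    Y = np.column_stack([np.sin(X[:, 0]), np.cos(X[:, 1]), X[:, 0] * X[:, 1]])
    centres = rng.uniform(-1, 1, size=(25, 2))
    net = RBFNetwork(centres, width=0.4).fit(X, Y)
    print("prediction for [0.2, -0.5]:", np.round(net.predict([0.2, -0.5])[0], 3))
```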
{"title":"Learning object-specific vision-based manipulation in virtual environments","authors":"A. Matsikis, T. Zoumpoulidis, F.H. Broicher, K. Kraiss","doi":"10.1109/ROMAN.2002.1045623","DOIUrl":"https://doi.org/10.1109/ROMAN.2002.1045623","url":null,"abstract":"In this paper a method for learning object-specific vision-based manipulation is described. The proposed approach uses a virtual environment containing models of the objects and the manipulator with an eye-in-hand camera to simplify and automate the training procedure. An object with a form that requires a unique final gripper position and orientation was used to train and test the implemented algorithms. A series of smooth paths leading to the final position are generated based on a typical path defined by an operator. Images and corresponding manipulator positions along the produced paths are gathered in the virtual environment and used for the training of a vision-based controller. The controller uses a structure of radial-basis function (RBF) networks and has to execute a long reaching movement that guides the manipulator to the final position so that afterwards only minor justification of the gripper is needed to complete the grasp.","PeriodicalId":222409,"journal":{"name":"Proceedings. 11th IEEE International Workshop on Robot and Human Interactive Communication","volume":"85 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132368780","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
CORA: An anthropomorphic robot assistant for human environment
Pub Date: 2002-12-10; DOI: 10.1109/ROMAN.2002.1045654
I. Iossifidis, C. Bruckhoff, C. Theis, C. Grote, C. Faubel, G. Schoner
We describe the general concept, system architecture, hardware, and behavioral abilities of CORA (Cooperative Robot Assistant), an autonomous non-mobile robot assistant. Starting from our basic assumption that the behavior to be performed determines the internal and external structure of the behaving system, we have designed CORA anthropomorphically to allow for humanlike behavioral strategies in solving complex tasks. Although CORA was built as a prototype of a service robot system to assist a human partner in industrial assembly tasks, we show that CORA's behavioral abilities are also transferable to a household environment. After describing the hardware platform and the basic concepts of our approach, we present experimental results for an assembly task.
{"title":"CORA: An anthropomorphic robot assistant for human environment","authors":"I. Iossifidis, C. Bruckhoff, C. Theis, C. Grote, C. Faubel, G. Schoner","doi":"10.1109/ROMAN.2002.1045654","DOIUrl":"https://doi.org/10.1109/ROMAN.2002.1045654","url":null,"abstract":"We describe the general concept, system architecture, hardware, and the behavioral abilities of CORA (Cooperative Robot Assistant), an autonomous nonmobile robot assistant. Outgoing from our basic assumption that the behavior to perform determines the internal and external structure of the behaving system, we have designed CORA anthropomorphic to allow for humanlike behavioral strategies in solving complex tasks. Although CORA was built as a prototype of a service robot system to assist a human partner in industrial assembly tasks, we will show that CORA's behavioral abilities are also conferrable in a household environment. After the description of the hardware platform and the basic concepts of our approach, we present some experimental results by means of an assembly task.","PeriodicalId":222409,"journal":{"name":"Proceedings. 11th IEEE International Workshop on Robot and Human Interactive Communication","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121230966","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The role of cognitive agent models in a multi-agent framework for human-humanoid interaction
Pub Date: 2002-12-10; DOI: 10.1109/ROMAN.2002.1045602
K. Kawamura
Partnership between a human and a robot could be enhanced if the robot were intelligent enough to understand human intention and adapt its behavior. In this paper, we describe a multi-agent framework for robot control and human-robot interaction. Cognitive agent models called the Self Agent and the Human Agent are being developed to achieve this goal.
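The abstract names the Self Agent and the Human Agent without detailing their interfaces. The sketch below is only a structural illustration of how such cognitive agent models might exchange an intention estimate with a behaviour-selection step; all classes, fields, and rules are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class IntentionEstimate:
    """A hypothetical message passed between agents (not the paper's format)."""
    task: str
    confidence: float

class HumanAgent:
    """Models the human partner; here it simply produces an intention estimate
    from an observed utterance (illustrative placeholder logic)."""
    def observe(self, utterance: str) -> IntentionEstimate:
        if "hand me" in utterance:
            return IntentionEstimate(task="fetch_object", confidence=0.8)
        return IntentionEstimate(task="idle", confidence=0.5)

class SelfAgent:
    """Models the robot's own state and selects a behaviour given the
    Human Agent's estimate (illustrative placeholder logic)."""
    def __init__(self):
        self.battery_ok = True
    def decide(self, intention: IntentionEstimate) -> str:
        if intention.task == "fetch_object" and intention.confidence > 0.6 and self.battery_ok:
            return "execute: fetch_object"
        return "execute: wait"

if __name__ == "__main__":
    human_agent, self_agent = HumanAgent(), SelfAgent()
    estimate = human_agent.observe("could you hand me the wrench")
    print(self_agent.decide(estimate))
```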
{"title":"The role of cognitive agent models in a multi-agent framework for human-humanoid interaction","authors":"K. Kawamura","doi":"10.1109/ROMAN.2002.1045602","DOIUrl":"https://doi.org/10.1109/ROMAN.2002.1045602","url":null,"abstract":"Partnership between a human and robot could be enhanced if the robot were intelligent enough to understand human intention and adapt its behavior. In this paper, we will describe a multi-agent framework for robot control and human-robot interaction. Cognitive agent models called the Self Agent and the Human Agent are being developed to achieve this goal.","PeriodicalId":222409,"journal":{"name":"Proceedings. 11th IEEE International Workshop on Robot and Human Interactive Communication","volume":"os-30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127772797","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}