Ranking of features for classifying industrial objects
S. Deb, D. K. Banerjee, D. Dutta Majumder
Pub Date: 1993-11-03 | DOI: 10.1109/ROMAN.1993.367747
An algorithm for the recognition and localization of partially occluded objects is presented. It is assumed that at least three corners, not necessarily consecutive, of every object in the scene are visible; no restriction is placed on the position or orientation of the objects. For each object, the position and rotation transformations are estimated by matching triangles of the model and the scene. The ambiguity of the same triangle appearing in more than one object model is resolved by a penalty function based on the area of mismatch. A new concept of feature ranking, which orders features by within-object variation and between-object discriminability, is introduced to reduce the number of initial hypotheses for the recognition algorithm. A complete system has been designed, implemented, and tested on a variety of scenes, and the results clearly demonstrate the effectiveness of the proposed method.
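The triangle-matching step above, recovering a position and rotation from three matched corners, can be sketched as a 2D least-squares rigid alignment. This is an illustrative reconstruction, not the authors' code: the `estimate_pose` function and the Kabsch/SVD formulation are assumptions standing in for whatever closed-form fit the paper actually uses.

```python
# Hypothetical sketch: estimate the rigid transform (R, t) that maps a model
# triangle onto a matched scene triangle, via 2D Kabsch/Procrustes alignment.
import numpy as np

def estimate_pose(model_tri, scene_tri):
    """Return rotation R and translation t with scene ~= R @ model + t."""
    M = np.asarray(model_tri, dtype=float)   # 3x2 model corner coordinates
    S = np.asarray(scene_tri, dtype=float)   # 3x2 matching scene corners
    cm, cs = M.mean(axis=0), S.mean(axis=0)  # centroids
    H = (M - cm).T @ (S - cs)                # 2x2 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against a reflection
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = cs - R @ cm
    return R, t
```

With noisy corner detections the same formula gives the least-squares best fit, which is what makes an area-of-mismatch penalty meaningful afterwards.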
Open sesame from top of your head - an event related potential based interface for the control of the virtual reality system
N. Mitsutake, K. Hoshiai, H. Igarashi, Y. Sugioka, Y. Yamamoto, K. Yamazaki, A. Yoshida, T. Yamaguchi
Pub Date: 1993-11-03 | DOI: 10.1109/ROMAN.1993.367705
We propose a new concept of a medical care system, the "hyper hospital", constructed on a computer-based electronic information network. It is built as a distributed system on the network and comprises all kinds of conventional medical care facilities in both real and virtual spaces. The virtual space of the hyper hospital and the patients' private information are owned and exclusively controlled by the patients, so their privacy receives the maximum possible protection. The hyper hospital requires an innovative human interface for consultation, treatment, and other communication with the medical staff; this is especially true when the users are severely disabled patients. The purpose of the present study is to establish a new human interface based on the event-related potential, which enables such patients to communicate with others, including medical care staff, without any physical action.
Redundant manipulator for obstacle avoidance and inverse kinematics solution by least squares
M. Bodur, A. Ersak
Pub Date: 1993-11-03 | DOI: 10.1109/ROMAN.1993.367739
An 8-degree-of-freedom (DOF) redundant manipulator is designed and realized for use in robot-human environments requiring obstacle avoidance. The two extra DOF provide the kinematic flexibility needed for obstacle avoidance. The modular mechanical structure combines simple mechanical construction with an easy forward kinematics solution. A motor-control module performs constant-acceleration motion in accordance with commands from a host computer. The inverse kinematics of the redundant manipulator is solved by applying the recursive least squares estimation (RLSE) method to the forward kinematics, using a model of the nonlinear kinematics linearized around the operating point.
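The linearize-and-least-squares idea can be illustrated on a much smaller arm. Everything below is an assumption for illustration, not the paper's 8-DOF design: a planar two-link arm with invented link lengths, a numerical Jacobian, and ordinary least squares per iteration standing in for the recursive (RLSE) estimator.

```python
# Illustrative sketch: inverse kinematics by linearizing the forward
# kinematics around the operating point and solving a least-squares update.
import numpy as np

L1, L2 = 1.0, 1.0  # assumed link lengths (not from the paper)

def forward(q):
    """End-effector position of a planar two-link arm."""
    x = L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1])
    y = L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])
    return np.array([x, y])

def solve_ik(target, q0, iters=50, eps=1e-6):
    q = np.asarray(q0, dtype=float)
    for _ in range(iters):
        err = np.asarray(target) - forward(q)
        if np.linalg.norm(err) < eps:
            break
        # numerical Jacobian of the forward kinematics at the operating point
        J = np.zeros((2, 2))
        h = 1e-6
        for j in range(2):
            dq = np.zeros(2)
            dq[j] = h
            J[:, j] = (forward(q + dq) - forward(q)) / h
        # least-squares correction of the linearized model
        q = q + np.linalg.lstsq(J, err, rcond=None)[0]
    return q
```

For a redundant arm the Jacobian is non-square and the same `lstsq` call returns the minimum-norm joint correction, which is one common way the extra DOF can be spent.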
What kinds of facial features are used in face retrieval?
M. Oda, T. Kato
Pub Date: 1993-11-03 | DOI: 10.1109/ROMAN.1993.367710
An experimental evaluation of a facial image retrieval system showed that our previously proposed "context-driven retrieval mechanism" facilitated the retrieval (or externalization) of ambiguous as well as better-defined facial images. Some features, such as face shape, eyebrow tilt, and eye shape, were found to be more salient than others. The context-driven retrieval mechanism, while maintaining the relative importance of facial features, facilitated retrieval by reducing the variance in the less salient features.
An autonomous eye robot for tele-operation
H. Kobayashi, T. Une
Pub Date: 1993-11-03 | DOI: 10.1109/ROMAN.1993.367698
This paper proposes an autonomous eye robot that moves freely in the slave space, acquires appropriate visual information, and transmits it to the master side. By watching this information, operators can easily understand the real situation on the slave side and execute tele-manipulation with ease. The first part of this paper gives examples demonstrating the effect of visual information in tele-operation; the second part proposes the autonomous eye robot and the control strategy for moving it.
Human visual perception principles may be used to build intelligent pattern recognition softwares: application to French map interpretation
J. Labiche, J. Ogier, R. Kocik, P. Santraine, R. Mullot, J. Caston
Pub Date: 1993-11-03 | DOI: 10.1109/ROMAN.1993.367735
This paper deals with a cadastral map interpretation device. Our approach is based on vectorizing the image by extracting the lowest information level, then reconstructing real cadastral entities, such as parcels, hatched parcels (buildings), blocks, and roads, using knowledge of the cadastral map. We adopt the fixed-2D-object visual perception strategy.
Interference detection among objects for operator assistance in virtual cooperative workspace
Y. Kitamura, H. Takemura, N. Ahuja, F. Kishino
Pub Date: 1993-11-03 | DOI: 10.1109/ROMAN.1993.367677
We propose an efficient method for detecting interference and potential collisions among objects to facilitate cooperative work in a virtual space. The method consists of two main stages: 1) a coarse stage, in which an approximate test identifies interfering objects in the entire workspace using an octree representation of object shapes; and 2) a fine stage, in which a polyhedral representation of object shapes is used to identify more accurately the object parts causing interference and collisions. For this purpose, specific pairs of faces belonging to the interfering objects found in the first stage are tested, so the detailed computation is performed on a reduced amount of data. The experimental results show better efficiency for the proposed method than for conventional collision detection, especially when the environment contains complicated objects.
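The coarse/fine structure can be sketched in miniature. This is a hedged illustration only: axis-aligned bounding boxes stand in for the paper's octree, and 2D polygons stand in for its 3D polyhedra, but the pruning logic is the same — a cheap overlap test rejects most pairs so the expensive edge-by-edge test runs on a reduced amount of data.

```python
# Illustrative two-stage interference test on 2D polygons.
def aabb(poly):
    xs = [p[0] for p in poly]
    ys = [p[1] for p in poly]
    return min(xs), min(ys), max(xs), max(ys)

def aabb_overlap(a, b):
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def _orient(p, q, r):
    """Signed area test: >0 if r is left of the line p->q."""
    return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

def segments_cross(p1, p2, p3, p4):
    d1, d2 = _orient(p3, p4, p1), _orient(p3, p4, p2)
    d3, d4 = _orient(p1, p2, p3), _orient(p1, p2, p4)
    return d1 * d2 < 0 and d3 * d4 < 0

def interfere(polys):
    """Return index pairs of polygons whose boundaries cross."""
    hits = []
    boxes = [aabb(p) for p in polys]
    for i in range(len(polys)):
        for j in range(i + 1, len(polys)):
            if not aabb_overlap(boxes[i], boxes[j]):
                continue  # coarse stage rejects the pair cheaply
            edges_i = list(zip(polys[i], polys[i][1:] + polys[i][:1]))
            edges_j = list(zip(polys[j], polys[j][1:] + polys[j][:1]))
            if any(segments_cross(a, b, c, d)
                   for a, b in edges_i for c, d in edges_j):
                hits.append((i, j))  # fine stage confirms interference
    return hits
```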
A successive learning neuro control system shooting irregular moving object
K. Hirota, T. Tsurumaru, A. Motegi, M. Ohtani, N. Yubazaki, T. Miyajima
Pub Date: 1993-11-03 | DOI: 10.1109/ROMAN.1993.367699
A successive learning control system based on a neural network technique with a genetic algorithm has been developed to simulate a human real-time learning process. In an application experiment, a 3-degree-of-freedom arm robot equipped with a CCD camera shoots a ball at an irregularly moving basket. The hitting rate is improved by the successive learning control, reaching 23% on average and 40% at maximum.
A study of a method for intention inference from human's behavior
Y. Inagaki, H. Sugie, H. Aisu, T. Unemi
Pub Date: 1993-11-03 | DOI: 10.1109/ROMAN.1993.367733
In this paper, we propose a concept of a symbiotic robot system with humans, as well as a method of intention inference from human behavior, for intelligent robots that carry out a simple task cooperatively with a human without complex communication. The robots obtain instantaneous data about the environment, including the human, from sensors and translate it into qualitative expressions with a vague time scale, such as "He is stopping for a while" or "He changed his direction just now". If the task is very simple and the goal is clear, a human behavior model can be expressed by a fuzzy automaton, in which the transition of the human's situation is controlled by fuzzy rules over these qualitative expressions. We set up a simple cooperative task and propose a human behavior model corresponding to this case.
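A fuzzy automaton of this kind can be sketched in a few lines. The states, rules, and membership function below are invented for illustration, not taken from the paper: the observer keeps a fuzzy degree for each intention state and updates it with max-min transition rules driven by a qualitative observation such as "he is stopping".

```python
# Toy fuzzy automaton: two intention states updated by max-min composition.
def mu_stopping(speed):
    """Fuzzy degree of 'he is stopping', from observed speed in m/s (assumed scale)."""
    return max(0.0, min(1.0, (0.5 - speed) / 0.5))

def step(state, speed):
    """One fuzzy transition over the states 'moving' / 'resting'."""
    s = mu_stopping(speed)
    rules = {
        # (from_state, to_state): rule strength given the observation
        ("moving", "resting"): s,
        ("moving", "moving"): 1.0 - s,
        ("resting", "resting"): s,
        ("resting", "moving"): 1.0 - s,
    }
    new = {}
    for to in ("moving", "resting"):
        # max-min composition of state degrees with rule strengths
        new[to] = max(min(state[frm], rules[(frm, to)]) for frm in state)
    total = sum(new.values()) or 1.0
    return {k: v / total for k, v in new.items()}
```

A robot acting on this model would read off the dominant state degree as the inferred intention, without any explicit communication from the human.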
Learning of robot biped walking with the cooperation of a human
Qinghua Li, A. Takanishi, I. Kato
Pub Date: 1993-11-03 | DOI: 10.1109/ROMAN.1993.367686
In this paper, the authors devise a method by which a biped walking robot realizes a learning process of walking stabilization with the cooperation of a human, modeled on the learning process of a baby. The authors developed a biped walking robot with a trunk and equipped it with a system for measuring the robot's zero moment point (ZMP) using universal force-moment sensors, together with a mechanical interface for the cooperation. In walking experiments, a human directly taught the robot the desired ZMP that makes walking stable, and the robot learned to stabilize its walking with the human's cooperation. As a result, the robot attained the given desired stability by modifying its trunk motion and achieved stable walking by itself after several learning trials. This kind of function is integral to an anthropomorphic robot, which the authors call a Humanoid, serving as a human's partner in the future.
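The paper measures the ZMP directly with force-moment sensors; as a hedged aside, the same quantity can also be computed for a simplified point-mass ("cart-table") model, an approximation often used when checking whether a planned trunk motion keeps the ZMP inside the support polygon. The model below is that standard simplification, not the paper's sensor-based measurement.

```python
# Cart-table ZMP: a point mass at constant height z_com accelerating
# horizontally shifts the ZMP away from the center of mass.
def zmp_cart_table(x_com, z_com, x_com_acc, g=9.81):
    """ZMP x-coordinate for a point mass at constant height z_com."""
    return x_com - (z_com / g) * x_com_acc
```

With zero horizontal acceleration the ZMP coincides with the projected center of mass; accelerating forward pushes the ZMP backward, which is why trunk motion can be used to place it where the teacher demands.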