Pub Date: 1993-11-03 | DOI: 10.1109/ROMAN.1993.367726
Title: Shared intelligence for telerobots: human interface with local intelligence
Authors: P. Bhatia, A. Murakami, M. Uchiyama
Abstract: In this paper, we discuss the design of a human interface to the local intelligence unit. We map our environment, consisting of the slave robot and the objects around it, into the configuration space by a mathematical transformation in which every revolute joint angle is mapped as x_i = tan(θ_i/2), leading us to describe the free space in terms of logical operations on polynomial entities. The configuration space can be the "joint variables space" or, as in our methodology, the "transformed joint variables" space. Collisions can then be detected along one-dimensional paths parameterized by polynomials in a variable s. The sampled points of the path input by the operator through the master arm are not necessarily collision free. The local intelligence unit checks for collisions and reports the colliding segments, if any, back to the operator. To achieve this, the input path has to be interpreted appropriately for collision detection to be performed. The proper curve fitting of the input path such that collision detection can be performed is also studied.
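The half-angle substitution named in the abstract can be illustrated with a minimal sketch; the function names are my own, and the paper's polynomial free-space construction is not reproduced here, only the mapping that makes it possible:

```python
import math

def half_angle_map(theta):
    """Map a revolute joint angle theta (radians) to x = tan(theta/2)."""
    return math.tan(theta / 2.0)

def sin_cos_from_x(x):
    """Recover sin(theta) and cos(theta) as rational functions of x.
    This rationality is what turns obstacle boundaries, normally
    trigonometric in the joint angles, into polynomial entities in x."""
    d = 1.0 + x * x
    return 2.0 * x / d, (1.0 - x * x) / d
```

With this substitution, a path x(s) whose components are polynomials in s keeps obstacle-intersection tests in the realm of polynomial root finding.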
Pub Date: 1993-11-03 | DOI: 10.1109/ROMAN.1993.367687
Title: Control of human cooperative robots based on stochastic prediction of human motion
Authors: S. Tadokoro, T. Takebe, Y. Ishikawa, T. Takamori
Abstract: The authors propose a control model for human cooperative robots. In this model, the future human position is predicted from human motion measured by a human recognition system, and robot trajectories are modified to improve safety, which is computed using the prediction result. In this paper, a stochastic-process prediction method is adopted for the control model. In a room divided into square cells, a human state variable (cell number, direction, and speed of motion) makes stochastic transitions as a Markov process. A simulation was performed for a room in which a human and a robot work together. The results demonstrate that stochastic prediction is very effective for planning robot trajectories away from danger: the robot can predict danger much earlier than with a deterministic prediction method.
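A toy sketch of the Markov-cell prediction idea: here the room is reduced to a single row of three cells and the transition matrix is invented for illustration; the paper's state additionally includes direction and speed of motion:

```python
import numpy as np

# Hypothetical row-stochastic transition matrix for a 3-cell corridor.
P = np.array([
    [0.6, 0.4, 0.0],   # from cell 0: stay, or step right
    [0.2, 0.5, 0.3],   # from cell 1: step left, stay, or step right
    [0.0, 0.4, 0.6],   # from cell 2: step left, or stay
])

def predict_occupancy(p0, steps):
    """Propagate an initial cell-occupancy distribution `steps` transitions
    into the future: p_k = p0 @ P^k."""
    return p0 @ np.linalg.matrix_power(P, steps)
```

A robot planner can treat `predict_occupancy(p0, k)` as a probabilistic danger map at future time k, flagging cells whose occupancy probability exceeds a threshold much earlier than a single deterministic extrapolation would.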
Pub Date: 1993-11-03 | DOI: 10.1109/ROMAN.1993.367731
Title: Intelligent scene recognition by personal computer networking
Authors: M. Kitazawa, Y. Sakai
Abstract: A human uses both images and language to represent ideas and to communicate with others. One may not be conscious of this fact, but images and words are frequently translated into each other in the brain during thinking. Such translation and understanding functions are essential in constructing intelligent man-machine interfaces. It is also important to pick images and words in accordance with the given context. This paper describes the architecture of such an intelligent scene recognition system built on computer networking. Information about an input scene is exchanged among four computers, and the scene is represented in a suitable form using images and words, much as a human does.
Pub Date: 1993-11-03 | DOI: 10.1109/ROMAN.1993.367685
Title: Dynamic control for robot-human collaboration
Authors: Kazuhiro Kosuge, H. Yoshida, Toshio Fukuda
Abstract: This paper proposes a new robotic system which consists of multiple robots and executes a task in cooperation with humans. The authors consider a task in which the robots and humans manipulate an object in coordination. They assume that robots and humans interact with each other only through the object, and that the humans manipulate the object around a point fixed to the object. Under these assumptions, a controller is designed for each robot so that the object has prescribed passive dynamics around that point. The stability of the resulting system is assured by Popov's hyperstability theorem under the assumption that the passivity conditions for the humans are satisfied. The control algorithm is applied to an experimental system consisting of two industrial manipulators and a human, and the results illustrate the proposed algorithm.
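The notion of prescribing passive dynamics at a point can be sketched in one degree of freedom: the controller makes the object behave like a mass-damper, M·a + D·v = f_human. This is only an illustrative admittance-style sketch; the parameters and the Euler integration are mine, not the paper's multi-robot controller:

```python
def passive_dynamics_step(v, f_human, M=2.0, D=4.0, dt=0.01):
    """One Euler step of prescribed passive dynamics M*a + D*v = f_human.
    Returns the updated object velocity; with zero human force the motion
    decays, which is the passivity property the stability proof relies on."""
    a = (f_human - D * v) / M
    return v + a * dt
```

When the human pushes, the object yields with a prescribed compliance; when the human lets go, energy is only dissipated, never injected, so the coupled human-robot-object loop stays stable under the stated passivity conditions.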
Pub Date: 1993-11-03 | DOI: 10.1109/ROMAN.1993.367727
Title: Two dimensional control for 6-DOF hand robot teleoperator
Authors: S. Inoue, T. Ojika, M. Harayama, T. Kobayashi, T. Imai
Abstract: This paper proposes two-dimensional control and obstacle avoidance methods using a virtual plane, in an environment where a virtual space generated by computer graphics is superimposed on the real space in which a hand robot actually exists. In the present method, the human operator, given a stereoscopic view from two cameras, controls the end effector of the hand robot using only a mouse (and a cursor on the computer display) and two planes drawn in the virtual space corresponding to the real space. Once the operator locates an arbitrary point on the virtual plane using only the mouse, the 3D position and pitch angle of the end effector can be controlled. Furthermore, obstacle avoidance in the task environment is possible with this system: since it has a 3D measurement system, obstacles can be avoided by displaying the relation between the virtual plane and the obstacles.
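The core geometric step, turning a 2D mouse position into a 3D target via a virtual plane, can be sketched as follows. The plane origin and basis vectors are hypothetical placeholders; the paper's calibration between virtual and real space is not reproduced:

```python
import numpy as np

def plane_point(origin, e1, e2, u, v):
    """3D point at plane coordinates (u, v): origin + u*e1 + v*e2.
    `origin` anchors the virtual plane in space; `e1`, `e2` are the
    in-plane basis directions the mouse axes are mapped onto."""
    return np.asarray(origin, float) + u * np.asarray(e1, float) + v * np.asarray(e2, float)
```

Two such planes suffice to command both position and pitch: the mouse picks a point on each, and the end effector is driven toward the first point while its pitch is set by the direction toward the second.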
Pub Date: 1993-11-03 | DOI: 10.1109/ROMAN.1993.367676
Title: Interactive adaptation interface monitoring and assisting operator by recursive fuzzy criterion
Authors: F. Arai, T. Fukuda, Y. Yamamoto, T. Naito, T. Matsui
Abstract: This paper considers the interactive relationship in which both the user and the machine try to adapt to each other; we call this relationship interactive adaptation. The interactive adaptation interface (IAI/F) is an intelligent interface designed with interactive adaptation in mind. This interface changes the characteristics of the system according to the given task, considering the state of the user such as skill level, technique, characteristics, and physical condition. We propose a design and realization method for the IAI/F based on recursive fuzzy reasoning, present a system built around interactive adaptation, and discuss its effectiveness on the basis of experimental results.
Pub Date: 1993-11-03 | DOI: 10.1109/ROMAN.1993.367752
Title: Motion based integration of auditory-visual information
Authors: T. Takeuchi
Abstract: How information from various sensory systems is integrated is one of the most difficult challenges in understanding human and robot perception and cognition. The problem of auditory-visual integration is defined as a correspondence problem. A motion-based integration schema of auditory-visual information is proposed as a solution to the correspondence problem between perceived auditory and visual space. In this schema, the motion extracted from auditory and visual information is combined to generate a complete perception of the world. Results from psychophysical experiments using auditory localization tasks support the author's hypothesis.
Pub Date: 1993-11-03 | DOI: 10.1109/ROMAN.1993.367755
Title: Big systems and sensibility of space perception
Authors: K. Nakatani
Abstract: We present a psychological review of the communication between a human and an artificial big system, and emphasize the necessity of further interdisciplinary approaches to the sensibility processing of both human and artificial systems. As the old two-factor theory of intelligence pointed out, sensibility is originally an elemental function for biological adaptation to the environment. It has different aspects for every sensory modality, and large individual variance is an essential characteristic. Perception of objects and of psychological space is discussed as the earliest and most primitive sensibility. Under the notion that perception is a problem-solving process, we review models of visual perception and propose a canvas model, which assumes information integration by unconscious attention. Three-dimensional space perception is discussed as a result of information reduction by the visual system. Biological motion perception shows a hierarchical structure, just like a knowledge system. An inner dialogue mechanism between two arrays of composing units would be essential for the adaptive information processing of the big system.
Pub Date: 1993-11-03 | DOI: 10.1109/ROMAN.1993.367753
Title: Information utilization of motor-error tolerance range in visuo-motor system
Authors: G. Ishimura, F. Pollick
Abstract: When deciding whether the aim of a movement has been achieved, some "tolerance range" about the result must be taken into account. If such a range is generally essential for human movements, one can hypothesize that a person recognizes this information prior to movement onset and fully utilizes it to program motor commands. For example, during a volleyball game, an excellent attacker can immediately recognize the "tolerance range for attacking" located among the blockers and receivers on the opposing side and converge the attacking movement toward it. Another example is the reaching movement to a visually presented target, where the size of the target implies the error tolerance range allowed for the movement. Although many kinds of movements are related to such a tolerance range, from the analytical point of view, experimental tasks should be simple enough to describe the results clearly.
Pub Date: 1993-11-03 | DOI: 10.1109/ROMAN.1993.367714
Title: Communicating a sailor's experience to a computer controller for ship steering
Authors: B. Balasuriya, P. Hoole
Abstract: Along the North-West (NW) coast of Sri Lanka lies one of the nation's busiest shipping routes. Both commercial boats transporting civilians and fishing boats frequently travel along this coastline, and the coastal waters are subject to severe winds and strong turbulence. This paper reports a highly interactive neural network controller that convincingly handles turbulent conditions. The simulation results presented here focus on sailor-to-controller communication during fair-weather ship steering and controller-to-sailor communication during heavy-weather ship steering. From the visual presentation of the computer's decisions, the sailor is able to learn from the controller, and the real-time visualisation routine gives the sailor a direct feeling for how the neural controller handles turbulence.