Relation between distance and pressure of acoustic signal
Pub Date: 1993-11-03 | DOI: 10.1109/ROMAN.1993.367737
M. Ishihara, J. Shirataki
In this paper, we investigated whether distance can be judged by comparing the sound from a source with a standard sound, and we further intended to correlate this sound comparison with relative distance. The results made clear that sound loudness correlated well with distance; in other words, distance can be estimated by comparing the sound heard with the standard sound. For the relation between distance and the sound heard by the examinee, the consistency index was good and the correlation was stable. Sound loudness fell sharply with distance up to about 1 m and decreased slowly beyond that distance.
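The paper does not give its estimation formula, but the reported fall-off (sharp within about 1 m, gradual beyond) is consistent with free-field attenuation. A minimal sketch of distance estimation by loudness comparison, assuming ideal inverse-distance (1/r) attenuation and hypothetical SPL values:

```python
def estimate_distance(spl_measured_db: float,
                      spl_standard_db: float,
                      standard_distance_m: float = 1.0) -> float:
    """Estimate source distance by comparing a measured sound pressure
    level (SPL) against a standard sound heard at a known distance.

    Assumes free-field inverse-distance (1/r) attenuation, under which
    SPL drops 6 dB per doubling of distance:
        L(r) = L(r0) - 20 * log10(r / r0)
    """
    level_drop_db = spl_standard_db - spl_measured_db
    return standard_distance_m * 10.0 ** (level_drop_db / 20.0)

# A source heard 12 dB quieter than the 1 m standard is ~4 m away.
print(estimate_distance(spl_measured_db=58.0, spl_standard_db=70.0))  # ~3.98
```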
{"title":"Relation between distance and pressure of acoustic signal","authors":"M. Ishihara, J. Shiratakj","doi":"10.1109/ROMAN.1993.367737","DOIUrl":"https://doi.org/10.1109/ROMAN.1993.367737","url":null,"abstract":"In this paper, we investigated if the distance can be judged by mutually comparing sound from a source with that from another. We further intended to correlate this sound comparison with the relative distance. It was made clear from the result that the sound loudness exhibited good correlation with the distance. In other words, the distance can be estimated by comparing the sound heard with the standard sound. As for the relation between the distance and the sound heard by the examinee, the consistency index was good and the correlation was consistent. The sound loudness sharply fell with distance, down to about 1 m, and slowly decreased beyond this distance.<<ETX>>","PeriodicalId":270591,"journal":{"name":"Proceedings of 1993 2nd IEEE International Workshop on Robot and Human Communication","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125052719","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Open Logic Machine-II: direct object coding for environment knowledge projection
Pub Date: 1993-11-03 | DOI: 10.1109/ROMAN.1993.367734
R. Kamejima, T. Aoki, Y. C. Watanabe
A non-deterministic coding scheme is presented for interactive scene analysis. Knowledge about the environment to be encountered is formally described in terms of events and objects. The semantics of the formal event-object system is supported by a structural image, that is, an environment context model. An object symbol is evoked in response to an observation and coded through the computation of an invariant structural image. During the coding process, object symbols generate an indicatable version of open knowledge for a non-artificial environment.
{"title":"Open Logic Machine-II: direct object coding for environment knowledge projection","authors":"R. Kamejima, T. Aoki, Y. C. Watanabe","doi":"10.1109/ROMAN.1993.367734","DOIUrl":"https://doi.org/10.1109/ROMAN.1993.367734","url":null,"abstract":"A non-deterministic coding scheme is presented for interactive scene analysis. Knowledge about the environment to be encountered is formally described in terms of events and objects. The semantics of the formal event-object system is supported by a structural image, that is, an environment context model. The object symbol is evoked in response to observation and coded through the computation of an invariant structural image. During the coding process, object symbols generate an indicatable version of open knowledge for non-artificial environment.<<ETX>>","PeriodicalId":270591,"journal":{"name":"Proceedings of 1993 2nd IEEE International Workshop on Robot and Human Communication","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123936897","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A motion control system with event-driven motion-module switching mechanism for robotic manipulators
Pub Date: 1993-11-03 | DOI: 10.1109/ROMAN.1993.367700
Y. Katayama, Y. Nanjo, K. Shimokura
The new motion control system described in this paper has an event-driven motion-module switching mechanism. This mechanism can modify a reference input in real time and can, for each event, select a previously prepared motion module according to sensor information. This motion-compensating mechanism is effective in robot tasks with uncertainties. The highly modular and extendable control system may be useful for various robot tasks such as machining and assembly. This paper describes the concept and implementation of the proposed system and presents experimental results demonstrating its feasibility.
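As an illustration of the idea (not the authors' implementation), a minimal sketch of an event-driven switching mechanism in Python, with hypothetical module names and a scalar reference input:

```python
from typing import Callable, Dict

# Hypothetical motion modules: each maps the current reference input to a
# modified reference, compensating for a particular sensed condition.
def nominal(ref: float) -> float:
    return ref                      # pass the reference through unchanged

def contact_compliance(ref: float) -> float:
    return ref * 0.5                # back off on contact

def retract(ref: float) -> float:
    return -abs(ref)                # reverse along the approach direction

class MotionModuleSwitcher:
    """Event-driven switching: each sensor event selects a previously
    prepared motion module that rewrites the reference in real time."""
    def __init__(self, modules: Dict[str, Callable[[float], float]]):
        self.modules = modules
        self.active = modules["nominal"]

    def on_event(self, event: str) -> None:
        self.active = self.modules.get(event, self.active)

    def step(self, ref: float) -> float:
        return self.active(ref)

switcher = MotionModuleSwitcher(
    {"nominal": nominal, "contact": contact_compliance, "overload": retract})
switcher.on_event("contact")        # sensor reports contact with the workpiece
print(switcher.step(1.0))           # 0.5 -- compensated reference
```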
{"title":"A motion control system with event-driven motion-module switching mechanism far robotic manipulators","authors":"Y. Katayama, Y. Nanjo, K. Shimokura","doi":"10.1109/ROMAN.1993.367700","DOIUrl":"https://doi.org/10.1109/ROMAN.1993.367700","url":null,"abstract":"The new motion control system described in this paper has an event-driven motion-module switching mechanism. This mechanism can modify a reference input in real-time and can, for each event, select a previously prepared motion-module according to sensor information. This motion-compensating mechanism is effective in robot tasks with uncertainties. The highly modular and extendable control system may be useful for various robot tasks such as machining and assembling. This paper describes the concept and implementation of the proposed system and presents some experimental results demonstrating its feasibility.<<ETX>>","PeriodicalId":270591,"journal":{"name":"Proceedings of 1993 2nd IEEE International Workshop on Robot and Human Communication","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123918659","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An evaluation of multi-agent-behavior in a sensing communication world
Pub Date: 1993-11-03 | DOI: 10.1109/ROMAN.1993.367703
T. Okada, T. Kuroda, Y. Hoshino, T. Shintani, H. Seki, H. Itoh, T. Law
In systems that achieve a goal through agent cooperation, i.e. multi-agent systems, communication between agents is a very important problem: to solve a problem cooperatively, communication between agents is indispensable. We introduce a model consisting of many agents with limited communication ability that achieve a goal in cooperation with each other. The model is an extension of the ant algorithm, and in it communication between agents may fail, so the agents must communicate differently from traditional methods: they are given certain communication abilities, called sensing communication abilities. We evaluate the conditions for cooperation between agents in this limited communication environment and the effect of the communication methods.
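A minimal sketch of the setting, not the paper's model: agents that can only reach neighbours within a sensing range, over a channel that drops messages with some probability (all names and parameters here are hypothetical):

```python
import random

class Agent:
    """An agent that can only exchange messages with neighbours inside its
    sensing range, over a channel that may fail."""
    def __init__(self, ident: int, position: float, sensing_range: float):
        self.ident, self.position, self.sensing_range = ident, position, sensing_range
        self.inbox: list[str] = []

def broadcast(sender: Agent, agents: list[Agent], msg: str,
              failure_prob: float = 0.3) -> None:
    for other in agents:
        if other is sender:
            continue
        in_range = abs(other.position - sender.position) <= sender.sensing_range
        if in_range and random.random() > failure_prob:
            other.inbox.append(msg)   # delivery succeeds

agents = [Agent(i, position=float(i), sensing_range=2.0) for i in range(5)]
broadcast(agents[0], agents, "food-at-0")
print([a.inbox for a in agents])      # only nearby agents, and only sometimes
```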
{"title":"An evaluation of multi-agent-behavior in a sensing communication world","authors":"T. Okada, T. Kuroda, Y. Hoshino, T. Shintani, H. Seki, H. Itoh, T. Law","doi":"10.1109/ROMAN.1993.367703","DOIUrl":"https://doi.org/10.1109/ROMAN.1993.367703","url":null,"abstract":"In systems which complete a goal by agent cooperation, i.e. a multi-agent system, the communication between agents is a very important problem. To solve a problem by the agent cooperation, the communication between agents is necessary and indispensable. We introduce a model which consists of many agents. These agents have limited communication ability. They complete a goal in cooperation with each other. This model is the extension of the ant algorithm. In this model, it is possible that the communication between the agents may fail. The agents have to communicate differently using traditional methods; the agents have some communication abilities, called the sensing communication abilities. We evaluate the condition of cooperation between agents in the limited communication environment and the effect of the communication methods.<<ETX>>","PeriodicalId":270591,"journal":{"name":"Proceedings of 1993 2nd IEEE International Workshop on Robot and Human Communication","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121512084","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Extender: a case study for human-robot interaction via transfer of power and information signals
Pub Date: 1993-11-03 | DOI: 10.1109/ROMAN.1993.367756
H. Kazerooni
A human's ability to perform physical tasks is limited by physical strength, not by intelligence. The author defines "extenders" as a class of robot manipulators worn by humans to augment human mechanical strength, while the wearer's intellect remains the central control system for manipulating the extender. The author's research objective is to determine the ground rules for the control of robotic systems worn by humans through the design, construction, and control of several prototype experimental direct-drive/non-direct-drive multi-degree-of-freedom hydraulic/electric extenders. The design of extenders is different from the design of conventional robots because the extender interfaces with the human on a physical level. Two sets of force sensors measure the forces imposed on the extender by the human and by the environment (i.e., the load). The extender's compliances in response to such contact forces were designed by selecting appropriate force compensators. The stability of the system of human, extender, and object being manipulated was analyzed. A mathematical expression for the extender performance was determined to quantify the force augmentation. Experimental studies on the control and performance of the experimental extender were conducted to verify the theoretical predictions.
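The paper's compensator design is not reproduced here; the following is a minimal admittance-style sketch of how two force measurements can be blended so that a small human force balances a much larger load force. The function name, gains, and the 10x augmentation ratio are illustrative assumptions:

```python
def extender_velocity_command(f_human: float, f_load: float,
                              c_human: float = 0.05,
                              c_load: float = 0.005) -> float:
    """Admittance-style sketch: the commanded end-point velocity responds
    compliantly to both contact forces. A larger human-side compliance
    means a small human force counters a large load force, which is the
    sense in which the wearer's strength is augmented."""
    return c_human * f_human + c_load * f_load

# 10 N from the wearer balances 100 N from the load (10x augmentation).
print(extender_velocity_command(f_human=10.0, f_load=-100.0))  # 0.0
```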
{"title":"Extender: a case study for human-robot interaction via transfer of power and information signals","authors":"H. Kazerooni","doi":"10.1109/ROMAN.1993.367756","DOIUrl":"https://doi.org/10.1109/ROMAN.1993.367756","url":null,"abstract":"A human's ability to perform physical tasks is limited by physical strength, not by intelligence. The author defines \"extenders\" as a class of robot manipulators worn by humans to augment human mechanical strength, while the wearer's intellect remains the central control system for manipulating the extender. The author's research objective is to determine the ground rules for the control of robotic systems worn by humans through the design, construction, and control of several prototype experimental direct-drive/non-direct-drive multi-degree-of-freedom hydraulic/electric extenders. The design of extenders is different from the design of conventional robots because the extender interfaces with the human on a physical level. Two sets of force sensors measure the forces imposed on the extender by the human and by the environment (i.e., the load). The extender's compliances in response to such contact forces were designed by selecting appropriate force compensators. The stability of the system of human, extender, and object being manipulated was analyzed. A mathematical expression for the extender performance was determined to quantify the force augmentation. Experimental studies on the control and performance of the experimental extender were conducted to verify the theoretical predictions.<<ETX>>","PeriodicalId":270591,"journal":{"name":"Proceedings of 1993 2nd IEEE International Workshop on Robot and Human Communication","volume":"68 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129864370","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Situation-adaptive degree of automation for system safety
Pub Date: 1993-11-03 | DOI: 10.1109/ROMAN.1993.367716
T. Inagaki
This paper discusses responsibility allocation between human and computer, or the degree of automation, in supervisory control of large, complex systems. Strategies for responsibility allocation in emergencies are analyzed probabilistically, taking into account the human's distrust of an alarm subsystem, inappropriate situation awareness, and the dynamics of the controlled process under various situations. It is proven that the degree of automation should not be fixed but must be changeable dynamically and flexibly depending on the situation. Criteria for setting the degree of automation at an appropriate level are given. The level thus obtained may not satisfy the principle that "a human locus of control is required", if that principle is interpreted to the letter. This suggests the need to extend the current understanding of human supervisory control if we wish to attain or improve system safety. A situation-adaptive degree of automation is indispensable for realizing human-centered automation.
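The paper's probabilistic criteria are not reproduced here; a minimal expected-cost sketch, with hypothetical costs and probabilities, illustrates the flavor of such a situation-adaptive trade-off:

```python
def automate(p_hazard_given_alarm: float,
             human_response_reliability: float,
             cost_accident: float,
             cost_unnecessary_trip: float) -> bool:
    """Expected-cost comparison: hand control to automation when the
    expected loss of waiting for the human exceeds that of tripping
    automatically. Structure and numbers are illustrative only."""
    p = p_hazard_given_alarm
    # Human path: an accident occurs if the hazard is real and the human fails to act.
    loss_human = p * (1.0 - human_response_reliability) * cost_accident
    # Automatic path: a cost is incurred when the alarm was spurious.
    loss_auto = (1.0 - p) * cost_unnecessary_trip
    return loss_auto < loss_human

# With a trustworthy alarm and an unreliable operator response,
# the appropriate degree of automation rises: the system should act.
print(automate(0.8, 0.5, cost_accident=1000.0, cost_unnecessary_trip=50.0))  # True
```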
{"title":"Situation-adaptive degree of automation for system safety","authors":"T. Inagaki","doi":"10.1109/ROMAN.1993.367716","DOIUrl":"https://doi.org/10.1109/ROMAN.1993.367716","url":null,"abstract":"This paper discusses responsibility allocation between human and computer, or degrees of automation, in supervisory control of large-complex systems. Strategies for responsibility allocation in emergencies are analyzed in a probabilistic manner by taking into account human's distrust on an alarm subsystem, inappropriate situation awareness, and dynamics of a controlled process under various situations. It is proven that degree of automation should not be fixed but must be changeable dynamically and flexibly depending on the situation. Criteria for setting degree of automation at an appropriate level are given. Thus obtained level for degree of automation may not satisfy the principle that \"a human locus of control is required\", if the principle is interpreted to the letter. That suggests the need for extending the current recognition on human supervisory control if we desire to attain or improve system safety. The situation-adaptive degree of automation is indispensable for realizing human-centered automation.<<ETX>>","PeriodicalId":270591,"journal":{"name":"Proceedings of 1993 2nd IEEE International Workshop on Robot and Human Communication","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134295110","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Investigation of the exhibition of facial expressions within human computer interaction
Pub Date: 1993-11-03 | DOI: 10.1109/ROMAN.1993.367725
W. King
This paper describes research conducted to determine whether humans exhibit facial expressions while interacting with a computer system. Fourteen college-aged subjects were chosen for the experiment: 3 Hispanic and 11 Caucasian, six of them female. Each subject performed five computer-based tasks chosen to simulate a wide range of typical applications; one of these tasks served as a baseline. The subjects' facial expressions were videotaped and later analyzed using the Ekman and Friesen Facial Action Coding System. The analysis revealed that the subjects did indeed exhibit facial expressions, and an analysis of variance showed a significant difference between task types. In addition, an ethological analysis revealed a surprising number of facial expression maskings.
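For readers unfamiliar with the statistics, the between-task comparison can be reproduced in outline with a one-way ANOVA; the task names and counts below are placeholders, not the paper's measurements:

```python
from scipy.stats import f_oneway

# Hypothetical counts of coded facial actions per subject, by task type
# (placeholder numbers for illustration only).
problem_solving = [12, 15, 9, 14, 11]
game_playing    = [22, 25, 19, 27, 24]
baseline        = [3, 5, 2, 4, 6]

f_stat, p_value = f_oneway(problem_solving, game_playing, baseline)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # a small p: task types differ
```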
{"title":"Investigation of the exhibition of facial expressions within human computer interaction","authors":"W. King","doi":"10.1109/ROMAN.1993.367725","DOIUrl":"https://doi.org/10.1109/ROMAN.1993.367725","url":null,"abstract":"This paper describes research which was conducted to determine if humans exhibit facial expressions while interacting with a computer system. Fourteen college-aged subjects were chosen for the experiment. The subjects included 3 Hispanics and 11 Caucasians. Six of the subjects' were female. Each of these subjects performed five computer-based tasks which were chosen to simulate a wide range of typical applications; one of these tasks was a baseline. The subject's facial expressions were videotaped and later analyzed using the Ekman and Friesen Facial Action Coding System. The analysis revealed that the subjects did indeed exhibit facial expressions; an analysis of variance showed a significant difference between task types. In addition, an ethological analysis revealed a surprising number of facial expression maskings.<<ETX>>","PeriodicalId":270591,"journal":{"name":"Proceedings of 1993 2nd IEEE International Workshop on Robot and Human Communication","volume":"603 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134327728","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Emotion space for analysis and synthesis of facial expression
Pub Date: 1993-11-03 | DOI: 10.1109/ROMAN.1993.367724
S. Morishima, H. Harashima
This paper presents a new emotion model which gives a criterion for deciding a human's emotional state from a face image. Our final goal is to realize a very natural and user-friendly human-machine communication environment by giving a face to a computer terminal or communication system that can also understand the user's emotional state. It is therefore necessary for the emotion model to express the emotional meaning of a parameterized facial expression and its motion quantitatively. Our emotion model is based on a 5-layered neural network, which has generalization and nonlinear mapping performance. The input and output layers have the same number of units, so an identity mapping can be realized and an emotion space can be constructed in the middle (3rd) layer. The mapping from the input layer to the middle layer performs emotion recognition, and that from the middle layer to the output layer corresponds to expression synthesis from the emotion value. Training is performed with 13 typical emotion patterns expressed by expression parameters. A subjective test of this emotion space proves the propriety of the model. The facial action coding system is selected as an efficient criterion to describe delicate facial expression and motion.
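A minimal modern sketch of such an identity-mapping network with a 3rd-layer bottleneck, written in PyTorch rather than the paper's original implementation; the layer sizes, parameter count, and emotion-space dimension are assumptions:

```python
import torch
import torch.nn as nn

N_PARAMS = 17   # hypothetical number of facial expression parameters

class EmotionSpaceNet(nn.Module):
    """5-layered network (input, hidden, bottleneck, hidden, output):
    expression parameters in, the same parameters out, with a
    low-dimensional 'emotion space' at the middle (3rd) layer."""
    def __init__(self, emotion_dim: int = 3):
        super().__init__()
        # Recognition: input layer -> hidden -> emotion space.
        self.encoder = nn.Sequential(
            nn.Linear(N_PARAMS, 10), nn.Sigmoid(),
            nn.Linear(10, emotion_dim), nn.Sigmoid())
        # Synthesis: emotion space -> hidden -> output layer.
        self.decoder = nn.Sequential(
            nn.Linear(emotion_dim, 10), nn.Sigmoid(),
            nn.Linear(10, N_PARAMS), nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))   # identity mapping target

net = EmotionSpaceNet()
x = torch.rand(13, N_PARAMS)                # 13 typical emotion patterns
loss = nn.functional.mse_loss(net(x), x)    # train the output to reproduce the input
emotion_coords = net.encoder(x)             # recognition: coordinates in emotion space
```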
{"title":"Emotion space for analysis and synthesis of facial expression","authors":"S. Morishima, H. Harashima","doi":"10.1109/ROMAN.1993.367724","DOIUrl":"https://doi.org/10.1109/ROMAN.1993.367724","url":null,"abstract":"This paper presents a new emotion model which gives a criteria to decide human's emotion condition from the face image. Our final goal is to realize very natural and user-friendly human-machine communication environment by giving a face to computer terminal or communication system which can also understand the user's emotion condition. So it is necessary for the emotion model to express emotional meanings of a parameterized face expression and its motion quantitatively. Our emotion model is based on 5-layered neural network which has generalization and nonlinear mapping performance. Both input and output layer has the same number of units. So identity mapping can be realized and emotion space can be constructed in the middle-layer (3rd layer). The mapping from input layer to middle layer means emotion recognition and that from middle layer to output layer corresponds to expression synthesis from the emotion value. Training is performed by typical 13 emotion patterns which are expressed by expression parameters. Subjective test of this emotion space proves the propriety of this model. The facial action coding system is selected as an efficient criteria to describe delicate face expression and motion.<<ETX>>","PeriodicalId":270591,"journal":{"name":"Proceedings of 1993 2nd IEEE International Workshop on Robot and Human Communication","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122384767","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Emotional evaluation of human arm motion models
Pub Date: 1993-11-03 | DOI: 10.1109/ROMAN.1993.367695
S. Shibata, K. Ohba, N. Inooka
Emotional evaluation of human arm motion models is considered by use of a semantic differential test. The models are determined by changing the velocity pattern, and the human arm motion model is recreated on a CRT display and by an industrial robot. To verify the influence of the smoothness of the velocity profile, robot hand motions whose velocity is based on measured human results and robot hand motions whose velocity is approximated by a triangle are compared. The results show that smoothness does not influence the evaluation, though the influence of the velocity peak position and of the maximum velocity remains to be discussed. To examine the impression the velocity peak position gives, motions with triangular velocity patterns whose peaks lie at different positions are conducted. The results show that humans feel the motion whose velocity peak is in the first half of the duration to be the most human-like, followed by that whose peak is in the middle; a peak in the second half of the duration does not feel human-like. To examine the influence of maximum velocity, motions whose velocity peak is in the first half of the duration are conducted under different maximum velocities. The results show that there exists a proper maximum velocity which makes the motion feel the most human-like.
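A minimal sketch of the triangular velocity patterns described, parameterized by the peak position (assuming 0 < peak_fraction < 1); the duration and distance values are illustrative:

```python
import numpy as np

def triangular_velocity(duration: float, distance: float,
                        peak_fraction: float, n: int = 100) -> np.ndarray:
    """Triangular velocity profile whose peak sits at peak_fraction of the
    duration. The peak velocity is fixed by the distance covered:
    the area under the triangle equals distance, so v_peak = 2*distance/duration."""
    v_peak = 2.0 * distance / duration
    t = np.linspace(0.0, duration, n)
    t_peak = peak_fraction * duration
    rising = t <= t_peak
    v = np.where(rising,
                 v_peak * t / t_peak,                       # ramp up to the peak
                 v_peak * (duration - t) / (duration - t_peak))  # ramp back down
    return v

# Peak in the first half of the motion: the case rated most human-like.
v = triangular_velocity(duration=1.0, distance=0.3, peak_fraction=0.4)
```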
{"title":"Emotional evaluation of human arm motion models","authors":"S. Shibata, K. Ohba, N. Inooka","doi":"10.1109/ROMAN.1993.367695","DOIUrl":"https://doi.org/10.1109/ROMAN.1993.367695","url":null,"abstract":"Emotional evaluation of human arm motion models is considered by the use of semantic differential test. The models are determined by changing the velocity pattern. The human arm motion model is recreated by CRT display and by an industry robot. In order to verify the influence of the smoothness of the velocity peak position to human motions, the robot hand motions whose velocity are based on the human results and the robot hand motions whose velocity are approximated by a triangle are conducted. The results show that the smoothness does not influence the evaluation, though that the influence from the velocity peak position and maximum velocity needs to be discussed. To examine the impression given to human by the velocity peak position, the motions whose velocity has different peak position are conducted about the triangular velocity pattern. The results show that human feel the most human-like to the motion whose velocity peak is in the first half of the duration and second most to that whose velocity peak is in the middle. However, human does not feel human-likeness to that in the second half of the duration. In order to examine the influence of maximum velocity, the motions whose velocity peak is in the first half of the duration are conducted under different maximum velocities. The results show that there exists proper maximum velocity which makes us feel human-likeness the most.<<ETX>>","PeriodicalId":270591,"journal":{"name":"Proceedings of 1993 2nd IEEE International Workshop on Robot and Human Communication","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115264353","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Alphabet and grammar in visual search
Pub Date: 1993-11-03 | DOI: 10.1109/ROMAN.1993.367750
S. Kita
A quantitative analysis was made to clarify the ability of human observers to detect a line of unique orientation against a background of lines of a different orientation. Results of a psychophysical experiment indicated that the rate of correct judgments increased with the number of adjacent lines, with lateral masking effects controlled. A decision-making model was formulated to simulate the results. The consistency between the experiment and the simulation suggested that (1) the human visual system uses orientation differences between adjacent lines, and (2) it integrates the orientation differences into a global percept by the Bayesian inference rule.
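The paper's likelihoods are not given here; a minimal sketch of Bayesian integration of local orientation differences, with illustrative Gaussian likelihoods and an assumed 30-degree target offset, shows the form of such a decision rule:

```python
import math

def target_present_posterior(orientation_diffs_deg: list[float],
                             prior: float = 0.5,
                             sigma_deg: float = 10.0) -> float:
    """Combine orientation differences to adjacent lines into a global
    percept by Bayes' rule, treating each local difference as independent
    evidence. A target line differs from its neighbours (here by ~30 deg);
    background lines do not."""
    log_odds = math.log(prior / (1.0 - prior))
    for d in orientation_diffs_deg:
        like_target = math.exp(-((d - 30.0) ** 2) / (2 * sigma_deg ** 2))
        like_bg = math.exp(-(d ** 2) / (2 * sigma_deg ** 2))
        log_odds += math.log(like_target / like_bg)
    return 1.0 / (1.0 + math.exp(-log_odds))

# More adjacent lines with a consistent ~30 deg difference raise the
# posterior, matching the rise in correct judgments with neighbour count.
print(target_present_posterior([28.0, 31.0, 27.0]))
```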
{"title":"Alphabet and grammar in visual search","authors":"S. Kita","doi":"10.1109/ROMAN.1993.367750","DOIUrl":"https://doi.org/10.1109/ROMAN.1993.367750","url":null,"abstract":"A quantitative analysis was made to clarify the ability of human observers to detect a line of a unique orientation against a background of lines of a different orientation. Results of a psychophysical experiment indicated that the performance of correct judgments increased with the number of adjacent lines under the control of lateral masking effects. A model for decision making was formulated to simulate the results. The consistence between the experiment and the simulation suggested; (1) the human visual system uses differences between adjacent lines of orientations, (2) it integrates the orientation differences into a global percept by the Bayesian inference rule.<<ETX>>","PeriodicalId":270591,"journal":{"name":"Proceedings of 1993 2nd IEEE International Workshop on Robot and Human Communication","volume":"72 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124003196","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}