Reading with a simulated central scotoma
N. Osaka
Pub Date: 1993-11-03 | DOI: 10.1109/ROMAN.1993.367748
Normal subjects were asked to read Japanese text with a visual mask patch (a simulated central scotoma) that obliterated the foveal visual field and moved in asynchrony with the eye during reading. When the foveal visual field was masked horizontally over 4-8 character spaces, reading became difficult. The results of the simulated central scotoma experiment indicate the importance of foveal and parafoveal vision during Japanese text reading.
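As a rough illustration of the moving-mask paradigm described above, the following minimal Python sketch shows how a mask of a given width in character spaces could blank out text around the current fixation. The function name and parameters are hypothetical; the paper reports a psychophysical experiment, not software.

```python
def apply_central_scotoma(line, fixation_index, mask_width):
    """Simulate a central scotoma by masking `mask_width` character
    spaces centred on the current fixation position."""
    half = mask_width // 2
    start = max(0, fixation_index - half)
    end = min(len(line), fixation_index + half + (mask_width % 2))
    return line[:start] + "\u25a0" * (end - start) + line[end:]

# Example: a 6-character mask centred on the 10th character.
text = "被験者は日本語のテキストを読むよう求められた。"
print(apply_central_scotoma(text, 10, 6))
```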
{"title":"Reading with a simulated central scotoma","authors":"N. Osaka","doi":"10.1109/ROMAN.1993.367748","DOIUrl":"https://doi.org/10.1109/ROMAN.1993.367748","url":null,"abstract":"Normal subjects were asked to read a Japanese text with a visual mask patch (simulated central scotoma) which obliterated foveal visual field and moved in asynchrony with eye during reading. When foveal visual field was masked horizontally with 4-8 character spaces reading became difficult. The results of a simulated central scotoma experiment indicate the importance of foveal and parafoveal vision during Japanese text reading.<<ETX>>","PeriodicalId":270591,"journal":{"name":"Proceedings of 1993 2nd IEEE International Workshop on Robot and Human Communication","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115378091","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
GAME-ROBOT prepared for avoidance motion
M. Yoda, Y. Shiota
Pub Date: 1993-11-03 | DOI: 10.1109/ROMAN.1993.367688
The objective of this study was to develop a robot that can play games with a human. The robot developed is called GAME-ROBOT. GAME-ROBOT is required to respond in a way that is "amusing" or "interesting" when playing a game with a human, which means that factors such as safety and psychological effects on the human cannot be neglected. For GAME-ROBOT, the minimum functions required for playing Othello are as follows: 1) a function for detecting the piece positions on the game board by image processing; 2) a function for deciding the robot's next move with the game software; 3) a function for handling a piece through the robot mechanisms; and 4) a function for detecting human behaviour with external sensors attached to the robot, to avoid unexpected contact between the human and the robot.
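The four functions above suggest a simple sense-decide-act loop. The sketch below wires hypothetical stand-ins for them together, with the external-sensor check gating every arm motion; none of these names come from the paper.

```python
import random

def detect_board(camera_image):
    """Function 1: image-processing stand-in; returns an 8x8 Othello board."""
    return [[None] * 8 for _ in range(8)]

def choose_move(board):
    """Function 2: game-software stand-in; picks any empty cell."""
    empties = [(r, c) for r in range(8) for c in range(8) if board[r][c] is None]
    return random.choice(empties)

def human_detected(sensors):
    """Function 4: external sensors report whether a human is within reach."""
    return any(sensors)

def place_piece(move):
    """Function 3: robot-mechanism stand-in."""
    print(f"placing piece at {move}")

def play_one_turn(camera_image, sensors):
    board = detect_board(camera_image)
    move = choose_move(board)
    # Safety gate: suspend motion while a human is inside the workspace.
    if human_detected(sensors):
        print("human detected - avoidance motion, waiting")
        return
    place_piece(move)

play_one_turn(camera_image=None, sensors=[False, False])
```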
{"title":"GAME-ROBOT prepared for avoidance motion","authors":"M. Yoda, Y. Shiota","doi":"10.1109/ROMAN.1993.367688","DOIUrl":"https://doi.org/10.1109/ROMAN.1993.367688","url":null,"abstract":"The objective of this study was to develop a robot which can play game with human. The robot developed is called GAME-ROBOT. GAME-ROBOT is required to make a response which is \"amusing\" or \"interesting\" when playing game with human. It means that we cannot neglect such factors as safety and psychological effects on the human being. For GAME-ROBOT, the minimum required functions for playing Othello are shown as follows: 1) a function for detecting piece situations on the game board by image processing; 2) a function to decide the next robot move by the game software; 3) a function for handling a piece through the robot mechanisms; and 4) a function for detecting the human behaviour by external sensors attached to the robot to avoid an unexpected contact of human with the robot.<<ETX>>","PeriodicalId":270591,"journal":{"name":"Proceedings of 1993 2nd IEEE International Workshop on Robot and Human Communication","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116783052","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An approach to task understanding and playback toward complex environments
H. Ogata, T. Takahashi
Pub Date: 1993-11-03 | DOI: 10.1109/ROMAN.1993.367701
Our study concerns a task description for recognizing a taught task and applying it in a different environment. The task is described by a sequence of task states defined over areas generated by segmenting the configuration space. The motion taught by the operator is translated into a sequence of task states based on the teaching environment. When executing the task, the system observes the task environment, maps the task states onto those of that environment, and generates a new task motion. This paper introduces a general description for robotic assembly tasks that is applicable to complex environments.
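One way to picture the described pipeline is as a quantisation of a taught motion into the sequence of configuration-space regions (task states) it visits. A minimal sketch under assumed names follows; the paper's actual segmentation of the configuration space is richer than these one-dimensional thresholds.

```python
def motion_to_states(trajectory, regions):
    """Quantise a taught trajectory into the sequence of configuration-space
    regions (task states) it visits, dropping consecutive repeats."""
    states = []
    for q in trajectory:
        for name, contains in regions.items():
            if contains(q):
                if not states or states[-1] != name:
                    states.append(name)
                break
    return states

# Teaching environment: regions defined here by simple x-coordinate thresholds.
teach_regions = {
    "free":     lambda q: q[0] < 0.3,
    "contact":  lambda q: 0.3 <= q[0] < 0.6,
    "inserted": lambda q: q[0] >= 0.6,
}
taught = [(0.1, 0.0), (0.2, 0.0), (0.4, 0.0), (0.7, 0.0)]
print(motion_to_states(taught, teach_regions))  # ['free', 'contact', 'inserted']
```

At playback time, the same state sequence would be re-grounded in regions observed in the new environment before a motion is regenerated.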
{"title":"An approach to task understanding and playback toward complex environments","authors":"H. Ogata, T. Takahashi","doi":"10.1109/ROMAN.1993.367701","DOIUrl":"https://doi.org/10.1109/ROMAN.1993.367701","url":null,"abstract":"Our study is on a task description for recognizing a teaching task and applying the task in a different environment. The task is described by a sequence of task states defined for areas generated by segmenting the configuration space. The teaching motion taught by the operator is transferred into a sequence of task states based on the teaching environment. When executing the task, the system observes the task environment, maps the task states to those of the environment, and generates a new task motion. This paper introduces a general description for robotic assembly task that is applicable to complex environments.<<ETX>>","PeriodicalId":270591,"journal":{"name":"Proceedings of 1993 2nd IEEE International Workshop on Robot and Human Communication","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127520547","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Virtual conference room: a metaphor for multi-user real-time conferencing systems
M. Kobayashi, I. Siio
Pub Date: 1993-11-03 | DOI: 10.1109/ROMAN.1993.367679
We have implemented a real-time desktop conferencing system in which many users share applications along with voice data. One of its unique features is its user interface, named the virtual conference room. Each room, shown in a window on the PC monitor, represents the status of a conference, and each participant is represented by an animated character called an agent. Conference management, such as floor passing, is carried out through direct manipulation of the agents. Voices are mixed so that they reflect the positions and status of agents within the room. The virtual conference room provides features suited to multi-user conferencing systems, such as visualization of the conference status, unified floor control, and dynamic subgrouping of participants.
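The position-dependent voice mixing mentioned above can be illustrated with a simple distance-based gain model. This is purely illustrative; the paper does not specify its mixing law, and all names here are assumptions.

```python
import math

def mix_voices(listener_pos, speakers):
    """Attenuate each speaker's gain with distance to the listener's agent,
    so voices reflect agent positions and status in the virtual room."""
    gains = {}
    for name, (pos, active) in speakers.items():
        if not active:          # e.g. speaker muted or outside the subgroup
            gains[name] = 0.0
            continue
        d = math.dist(listener_pos, pos)
        gains[name] = 1.0 / (1.0 + d)   # closer agents sound louder
    return gains

speakers = {"alice": ((0, 0), True), "bob": ((3, 4), True), "carol": ((1, 1), False)}
print(mix_voices((0, 0), speakers))
```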
{"title":"Virtual conference room: a metaphor for multi-user real-time conferencing systems","authors":"M. Kobayashi, I. Siio","doi":"10.1109/ROMAN.1993.367679","DOIUrl":"https://doi.org/10.1109/ROMAN.1993.367679","url":null,"abstract":"We have implemented a real-time desktop conferencing system, where many users share applications along with voice data. One of its unique features is its user interface, named the virtual conference room. Each room, shown in a window on the PC monitor, represents a conference status, and each participant is represented as an animation character, called an agent. The conference management, such as floor passing, is executed through direct manipulation on agents. Voices are intermixed so that they reflect the positions and status of agents within the room. The virtual conference room has achieved features suitable for multi-user conferencing systems, such as visualization of the conference status, unified floor control and dynamic subgrouping of participants.<<ETX>>","PeriodicalId":270591,"journal":{"name":"Proceedings of 1993 2nd IEEE International Workshop on Robot and Human Communication","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123775835","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A basic study of dynamic recognition of human facial expressions
H. Kobayashi, F. Hara, S. Ikeda, H. Yamada
Pub Date: 1993-11-03 | DOI: 10.1109/ROMAN.1993.367709
In order to develop an active human interface (AHI) for heart-to-heart communication between machine and human being, we have been investigating machine recognition of human emotions from facial expressions. This paper aims to clarify the difference between static and dynamic recognition of facial expressions and to investigate the characteristics of dynamic recognition. First, we obtain 11 facial images for each of the 6 basic expressions, changing sequentially from neutral to the apex of each expression. For comparison, we conduct static visual recognition tests by showing single facial images to subjects, and dynamic recognition tests using the sequentially changing images. The comparison between static and dynamic recognition results reveals no difference. We further prepare two types of sequential facial images: time-wise sequences changing from neutral to apex (UP) and from apex to neutral (DOWN). The results of dynamic recognition tests using the UP and DOWN sequences reveal that the recognition point in a changing facial expression differs between the UP and DOWN time-sequential face images; i.e., there exists a hysteresis in the human dynamic recognition of facial expressions.
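The reported hysteresis can be expressed as a difference between the frame at which recognition switches in the UP direction and the frame at which it switches in the DOWN direction. A toy sketch with clearly hypothetical responses (not the paper's data):

```python
def first_frame(responses, target):
    """First frame index (0 = neutral, 10 = apex) at which the subject
    reports the target expression."""
    return next(i for i, r in enumerate(responses) if r == target)

# Hypothetical per-frame responses (NOT the paper's data), indexed from
# neutral (frame 0) to apex (frame 10).
up_responses = ["neutral"] * 6 + ["happy"] * 5    # presented neutral -> apex
down_responses = ["neutral"] * 3 + ["happy"] * 8  # presented apex -> neutral

up_point = first_frame(up_responses, "happy")     # 6: recognition lags going up
down_point = first_frame(down_responses, "happy") # 3: recognition persists going down
print("hysteresis:", up_point - down_point, "frames")
```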
{"title":"A basic study of dynamic recognition of human facial expressions","authors":"H. Kobayashi, F. Hara, S. Ikeda, H. Yamada","doi":"10.1109/ROMAN.1993.367709","DOIUrl":"https://doi.org/10.1109/ROMAN.1993.367709","url":null,"abstract":"In order to develop an active human interface (AHI) for heart-to-heart communication between machine and human being, we've been investigating the machine recognition of human emotions from facial expressions. This paper aims at clarifying the difference between static and dynamic recognition of facial expressions and investigating the characteristics of their dynamic recognition. In the first place, we obtain the 11 facial images for each of 6 basic expressions, sequentially changing from neutral to one of basic expressions (apex). Then, for comparison, we undertake static visual recognition tests by showing facial images to subjects, and then, by using the sequentially changing images, we also perform dynamic recognition tests. The comparison between static and dynamic recognition results reveals no difference. We further prepare two types of sequential facial images; time-wise sequential images changing from neutral to apex (UP) and from apex to neutral (DOWN). The results of dynamic recognition tests by using UP and DOWN sequential images reveal the fact that the recognition point in changing facial expression differs between the UP and DOWN time-sequential face image data, i.e., there exists a hysteresis in human dynamic recognition of facial expressions.<<ETX>>","PeriodicalId":270591,"journal":{"name":"Proceedings of 1993 2nd IEEE International Workshop on Robot and Human Communication","volume":"159 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115195788","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Development of an eye movement tracking type head mounted display: system proposal and evaluation experiments
K. Iwamoto, Kazuo Tanie, Taro Maeda, Kouji Ichie, Manabu Yasukawa, C. Horiguchi
Pub Date: 1993-11-03 | DOI: 10.1109/ROMAN.1993.367706
This paper describes a head mounted display (HMD) system that can present realistic visual images of the environment. Several types of commercial HMDs exist for virtual reality systems. One problem with these systems is that the screen size is kept small in order to keep the image resolution high; as a result, the screen frame and the displayed images overlap and impair the sense of reality. To improve on this, a new type of HMD is proposed. The idea is based on superimposing small, high-resolution images on wide, low-resolution images. The high-resolution images are presented only near the center of the human retina, according to the detected motion of the eyeball. In this paper, the design of the optical system and the image control system is explained. Preliminary evaluation experiments using a prototype system are also presented to demonstrate the effectiveness of the proposed idea.
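The superimposition scheme can be sketched as compositing a high-resolution inset onto a wide low-resolution frame at the tracked gaze point. Array names and sizes below are assumptions, not the prototype's actual parameters.

```python
import numpy as np

def composite_foveated(wide_low_res, inset_high_res, gaze_xy):
    """Paste a small high-resolution inset into the wide low-resolution
    frame, centred on the detected gaze position."""
    frame = wide_low_res.copy()
    h, w = inset_high_res.shape[:2]
    x, y = gaze_xy
    top, left = max(0, y - h // 2), max(0, x - w // 2)
    bottom = min(frame.shape[0], top + h)
    right = min(frame.shape[1], left + w)
    frame[top:bottom, left:right] = inset_high_res[: bottom - top, : right - left]
    return frame

wide = np.zeros((480, 640), dtype=np.uint8)        # low-resolution wide field
inset = np.full((64, 64), 255, dtype=np.uint8)     # high-resolution foveal patch
out = composite_foveated(wide, inset, gaze_xy=(320, 240))
print(out.shape, out[240, 320])                    # 255 at the gaze point
```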
{"title":"Development of an eye movement tracking type dead mounted display: system proposal and evaluation experiments","authors":"K. Iwamoto, Kazuo Tanie, Taro Maeda, Kouji Ichie, Manabu Yasukawa, C. Horiguchi","doi":"10.1109/ROMAN.1993.367706","DOIUrl":"https://doi.org/10.1109/ROMAN.1993.367706","url":null,"abstract":"This paper describes a head mounted display (HMD) system which can present realistic visual images of the environment. There are several types of commercial HMDs for virtual reality systems. One of the problems of their systems is that the screen size is small in order to keep the image resolution high. Therefore, the screen frame and displayed images overlap and impair the sense of reality. In order to improve it, a new type of HMD is proposed. The idea is based on superimposing small, high resolution images on wide, low resolution images. The high resolution images are presented only near the center of human retina, according to the motion of eyeball detected. In this paper, the design of the optical system and image control system is explained. Also, some preliminary evaluation experiments using a prototype system are introduced to demonstrate the effectiveness of the proposed idea.<<ETX>>","PeriodicalId":270591,"journal":{"name":"Proceedings of 1993 2nd IEEE International Workshop on Robot and Human Communication","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122573247","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Which do you feel comfortable, interview by a real doctor or by a virtual doctor? A comparative study of responses to inquiries with various psychological intensities, for the development of the Hyper Hospital
A. Yoshida, Y. Hagita, K. Yamazaki, T. Yamaguchi
Pub Date: 1993-11-03 | DOI: 10.1109/ROMAN.1993.367690
The "Hyper Hospital" is a novel medical care system which will be constructed on an electronic information network. The human interface of the Hyper Hospital based on the modern virtual reality technology is expected to maximally enhance patients' ability of healing illness by computer-supported online visual consultations. In order to investigate the effects and features of online visual consultations in the Hyper Hospital, we conducted an experiment to clarify the influence of electronic interviews on the talking behavior of interviewees in the similar context to real doctor-patient interactions. Four types of distant-confrontation interviews were presented to voluntary subjects and their verbal and nonverbal responses were analyzed from the human ethological viewpoints. In the media-mediated interviews both the latency and the duration of interviewees' utterances in answering questions increased compared with those of live face to face interviews. These results suggest that the interviewee became more verbose or talkative in the mediated interviews, but his psychological tension was generally augmented.<>
{"title":"Which do you feel comfortable, interview by a real doctor or by a virtual doctor? A comparative study of responses to inquiries with various psychological intensities, for the development of the Hyper Hospital","authors":"A. Yoshida, Y. Hagita, K. Yamazaki, T. Yamaguchi","doi":"10.1109/ROMAN.1993.367690","DOIUrl":"https://doi.org/10.1109/ROMAN.1993.367690","url":null,"abstract":"The \"Hyper Hospital\" is a novel medical care system which will be constructed on an electronic information network. The human interface of the Hyper Hospital based on the modern virtual reality technology is expected to maximally enhance patients' ability of healing illness by computer-supported online visual consultations. In order to investigate the effects and features of online visual consultations in the Hyper Hospital, we conducted an experiment to clarify the influence of electronic interviews on the talking behavior of interviewees in the similar context to real doctor-patient interactions. Four types of distant-confrontation interviews were presented to voluntary subjects and their verbal and nonverbal responses were analyzed from the human ethological viewpoints. In the media-mediated interviews both the latency and the duration of interviewees' utterances in answering questions increased compared with those of live face to face interviews. These results suggest that the interviewee became more verbose or talkative in the mediated interviews, but his psychological tension was generally augmented.<<ETX>>","PeriodicalId":270591,"journal":{"name":"Proceedings of 1993 2nd IEEE International Workshop on Robot and Human Communication","volume":"89 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122596193","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Motion planning of computer controlled automata
R. Ikeura, M. Kimura, H. Inooka
Pub Date: 1993-11-03 | DOI: 10.1109/ROMAN.1993.367693
This paper describes motion planning for computer controlled automata (CCA). We consider two steps in the motion planning: (1) create rough patterns of motion; (2) modify the patterns iteratively to create the desired motion. A real-time planning method for robot tasks, developed previously by the authors, is applied as the first step. Using this method, a human operator can generate a trajectory for the CCA with a joystick while monitoring the actual motion. For accurately modifying inadequate motions in the second step, a CAD system is developed. Tools for the CAD system are described, and their effectiveness is then shown in an experimental example.
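The two-step procedure can be caricatured in a few lines: integrate joystick input into a rough path, then apply precise CAD-style edits to individual waypoints. All names here are hypothetical, not the paper's interface.

```python
def rough_trajectory(joystick_samples, dt=0.1):
    """Step 1: integrate operator joystick velocities into a rough path."""
    path, pos = [], 0.0
    for v in joystick_samples:
        pos += v * dt
        path.append(pos)
    return path

def refine(path, edits):
    """Step 2: CAD-style modification - overwrite selected waypoints."""
    out = list(path)
    for index, value in edits.items():
        out[index] = value
    return out

rough = rough_trajectory([1.0, 1.0, 0.5, 0.0, -0.5])
print(refine(rough, {2: 0.22}))   # fix one inadequate waypoint precisely
```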
{"title":"Motion planning of computer controlled automata","authors":"R. Ikeura, M. Kimura, H. Inooka","doi":"10.1109/ROMAN.1993.367693","DOIUrl":"https://doi.org/10.1109/ROMAN.1993.367693","url":null,"abstract":"This paper describes motion planning of computer controlled automata (CCA). We consider two steps for the motion planning; (1) Create rough patterns of motion. (2) Modify the patterns iteratively and create a desired motion. A real time planning method for a robot task, which has been developed by authors, is applied as the first step. Using this method, a human operator can generate a trajectory of the CCA while monitoring the actual motion by operating a joystick. For accurately modifying inadequate motion in the second step, a CAD system is developed. Tools for the CAD system are described and, then, the effectiveness is shown in an experimental example.<<ETX>>","PeriodicalId":270591,"journal":{"name":"Proceedings of 1993 2nd IEEE International Workshop on Robot and Human Communication","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129777656","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Study on face robot for active human interface-mechanisms of face robot and expression of 6 basic facial expressions
H. Kobayashi, F. Hara
Pub Date: 1993-11-03 | DOI: 10.1109/ROMAN.1993.367708
In order to develop an active human interface (AHI) that realizes heart-to-heart communication between intelligent machines and human beings, we have been investigating methods for improving the sensitivity of "KANSEI" communication between them. This paper deals with the mechanical aspects of a "Face Robot" that produces facial expressions in order to express artificial emotions as a human being does. We select flexible microactuators (FMAs) driven by air pressure to move the control points of the face robot corresponding to action units (AUs), and we design and construct a 3-dimensional human-face-like robot. We then conduct recognition tests by showing subjects images of the 6 basic facial expressions produced on the face robot; the results show that the correct recognition ratio reaches 83.3% over the 6 basic facial expressions. This high ratio indicates the potential of the face robot as a "KANSEI" communication medium between intelligent machines and human beings.
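The mapping from a target expression to action units (AUs) to actuator commands can be sketched as two table lookups. The AU recipes and pressure values below are placeholders for illustration, not the paper's calibration.

```python
# Hypothetical AU recipes for two of the six basic expressions.
EXPRESSION_TO_AUS = {
    "happiness": {"AU6": 1.0, "AU12": 1.0},
    "surprise":  {"AU1": 1.0, "AU2": 1.0, "AU5": 0.8, "AU26": 0.6},
}

def fma_pressures(expression, max_pressure_kpa=50.0):
    """Convert a target expression into air pressures for the flexible
    microactuators (FMAs) that drive the corresponding control points."""
    aus = EXPRESSION_TO_AUS[expression]
    return {au: intensity * max_pressure_kpa for au, intensity in aus.items()}

print(fma_pressures("surprise"))
```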
{"title":"Study on face robot for active human interface-mechanisms of face robot and expression of 6 basic facial expressions","authors":"H. Kobayashi, F. Hara","doi":"10.1109/ROMAN.1993.367708","DOIUrl":"https://doi.org/10.1109/ROMAN.1993.367708","url":null,"abstract":"In order to develop an active human interface (AHI) that realizes heart-to-heart communication between intelligent machine and human being, we have been undertaking the investigation of the method for improving the sensitivity of \"KANSEI\" communication between intelligent machine and human being. This paper deals with the mechanical aspects of \"Face Robot\" that produces facial expressions in order to express the artificial emotions as in human being. We select the flexible microactuator (FMA) driven by air pressure for the sake of moving the control points of face robot corresponding to action units (AUs), and then we design and construct a 3 dimensional human-face-like robot. Then we undertake recognition test by showing the face images of 6 basic facial expressions expressed on the face robot, and the results show us that the correct recognition ratio accomplishes 83.3% for 6 basic facial expressions. This high ratio indicates the possibility of the face robot as a \"KANSEI\" communication media between intelligent machine and human being.<<ETX>>","PeriodicalId":270591,"journal":{"name":"Proceedings of 1993 2nd IEEE International Workshop on Robot and Human Communication","volume":"144 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129044694","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A model of emotions and emotion communication
N. Frijda, D. Moffat
Pub Date: 1993-11-03 | DOI: 10.1109/ROMAN.1993.367754
This paper sketches a psychological model of emotions. Emotions are regarded as provisions for signalling the relevance of events to the major goals or concerns of the individual, and for modifying action readiness in a way that corresponds with the appraisal of those events. Different emotions correspond both to different appraisals and to different modes of change in action readiness. Because emotions are interpreted as functional provisions, the model lends itself to incorporation into artificial systems. The modes of emotional action readiness include readiness for actions that influence social interactants, so as to resolve the issues posed by relevant events. This implies that emotional systems recognize the corresponding states of readiness. Emotion communication therefore has its roots in elementary aspects of the emotion process itself.
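A minimal rendering of the model's core idea, that an appraisal of an event's relevance to concerns modifies action readiness, might look like the following sketch. The thresholds and labels are invented for illustration and are not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class Appraisal:
    """How an event bears on the agent's concerns."""
    concern_relevance: float   # 0..1: does the event matter at all?
    goal_congruence: float     # -1..1: helps (+) or harms (-) the goals

def action_readiness(appraisal):
    """Map an appraisal to a mode of action readiness, in the spirit of
    treating emotions as relevance signals that modify readiness."""
    if appraisal.concern_relevance < 0.2:
        return "no change"           # irrelevant events leave readiness alone
    if appraisal.goal_congruence > 0:
        return "approach"            # e.g. joy: readiness to engage
    return "avoid or oppose"         # e.g. fear or anger: protective readiness

print(action_readiness(Appraisal(concern_relevance=0.9, goal_congruence=-0.7)))
```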
{"title":"A model of emotions and emotion communication","authors":"N. Frijda, D. Moffat","doi":"10.1109/ROMAN.1993.367754","DOIUrl":"https://doi.org/10.1109/ROMAN.1993.367754","url":null,"abstract":"This paper sketches a psychological model of emotions. Emotions are regarded as provisions for signalling the relevance of events for the major goals or concerns of the individual, and for modifying action readiness in a way that corresponds with the appraisal of the events. Different emotions correspond both with different appraisals, and with different modes of change in action readiness. Emotions being interpreted as functional provisions, the model lends itself to incorporation into artificial systems. The modes of emotional action readiness include readiness for actions that influence social interactants, for the benefit of resolving the issues posed by relevant events. This implies that emotional systems recognize the corresponding states of readiness. Emotion communication therefore has its roots in elementary aspects of the emotion process itself.<<ETX>>","PeriodicalId":270591,"journal":{"name":"Proceedings of 1993 2nd IEEE International Workshop on Robot and Human Communication","volume":"31 4","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120887575","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}