A pneumatic bellows manipulator with force sensing ability
Y. Hayakawa, S. Kawamura
Pub Date: 1993-11-03 | DOI: 10.1109/ROMAN.1993.367721
In pneumatic bellows actuators, external forces can be measured easily and accurately from the pressure in the bellows chamber, because there is no sliding part to generate friction. The bellows can therefore be used as a force-sensing actuator that works as a sensor and an actuator simultaneously. Using pneumatic bellows, we design a single joint of a robot manipulator that is driven antagonistically by two bellows actuators. From experimental results we clarify the static and dynamic characteristics of the proposed manipulator and confirm that, by exploiting its force-sensing ability, the robot can move flexibly in response to external forces.
{"title":"A pneumatic bellows manipulator with force sensing ability","authors":"Y. Hayakawa, S. Kawamura","doi":"10.1109/ROMAN.1993.367721","DOIUrl":"https://doi.org/10.1109/ROMAN.1993.367721","url":null,"abstract":"In the case of pneumatic bellows actuators, the external forces are easily and exactly measured from the pressure in the chamber of bellows because there is no sliding part which generates friction. Therefore, the bellows can be utilized as a force sensing actuator which works as a sensor and an actuator simultaneously. By using the pneumatic bellows we design one joint of a robot manipulator which is activated by the two bellows actuators antagonistically. From some experimental results we disclose the static and dynamic characteristics of the proposed robot manipulator and confirm that the robot can move flexibly following external forces by making use of the force sensing ability.<<ETX>>","PeriodicalId":270591,"journal":{"name":"Proceedings of 1993 2nd IEEE International Workshop on Robot and Human Communication","volume":"68 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116037000","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Augmented audio reality: telepresence/VR hybrid acoustic environments
Michael Cohen, S. Aoki, N. Koizumi
Pub Date: 1993-11-03 | DOI: 10.1109/ROMAN.1993.367692
Augmented audio reality consists of hybrid presentations in which computer-generated sounds are overlaid on top of more directly acquired audio signals. We are exploring the alignability of binaural signals with artificially spatialized sources, synthesized by convolving monaural signals with left/right pairs of directional transfer functions. We use MAW (multidimensional audio windows), a NeXT-based system, as a binaural directional mixing console. Since the rearrangement of a dynamic map is used to select transfer functions on the fly, a user may specify the virtual location of a sound source, throwing the source into perceptual space and using exocentric graphical control to drive an egocentric auditory display. As a concept demonstration, we muted a telephone and then used MAW to spatialize a ringing signal at its location, putting the sonic image of the phone into the office environment. By juxtaposing and mixing 'real' and 'synthetic' audio transmissions, we are exploring the relationship between acoustic telepresence and VR presentations: telepresence manifests as the actual configuration of sources in a sound field, as perceivable by a dummy head; VR is the perception yielded by filtering virtual sources with respect to virtual sinks. We have conducted an experiment testing the usefulness of such a hybrid.
{"title":"Augmented audio reality: telepresence/VR hybrid acoustic environments","authors":"Michael Cohen, S. Aoki, N. Koizumi","doi":"10.1109/ROMAN.1993.367692","DOIUrl":"https://doi.org/10.1109/ROMAN.1993.367692","url":null,"abstract":"Augmented audio reality consists of hybrid presentations in which computer-generated sounds are overlayed on top of more directly acquired audio signals. We are exploring the alignability of binaural signals with artificially spatialized sources, synthesized by convolving monaural signals with left/right pairs of directional transfer functions. We use MAW (multidimensional audio windows), a NeXT-based system, as a binaural directional mixing console. Since the rearrangement of a dynamic map is used to dynamically select transfer functions, a user may specify the virtual location of a sound source, throwing the source into perceptual space, using exocentric graphical control to drive egocentric auditory display. As a concept demonstration, we muted a telephone, and then used MAW to spatialize a ringing signal at its location, putting the sonic image of the phone into the office environment. By juxtaposing and mixing 'real' and 'synthetic' audio transmissions, we are exploring the relationship between acoustic telepresence and VR presentations: telepresence manifests as the actual configuration of sources in a sound field, as perceivable by a dummyhead; VR is the perception yielded by filtering of virtual sources with respect to virtual sinks. We have conducted an experiment testing the usefulness of such a hybrid.<<ETX>>","PeriodicalId":270591,"journal":{"name":"Proceedings of 1993 2nd IEEE International Workshop on Robot and Human Communication","volume":"520 2","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114095857","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Alternative forms of delivery within engineering education
D. Preston
Pub Date: 1993-11-03 | DOI: 10.1109/ROMAN.1993.367680
Many educational establishments face the challenge of rising staff-student ratios coupled with student demand for greater flexibility. Computer-aided learning, educational computer-aided software engineering (CASE) and group decision support systems (GDSS) have valuable roles to play in fulfilling student expectations whilst optimising the use of human resources. Increasingly, institutions are considering wider and alternative forms of delivery, such as interactive video (IV). The author highlights the many considerations necessary before utilising IV as a form of course delivery. In addition, the author classifies the range of products available, thus providing background for any institution considering the introduction of such media.
{"title":"Alternative forms of delivery within engineering education","authors":"D. Preston","doi":"10.1109/ROMAN.1993.367680","DOIUrl":"https://doi.org/10.1109/ROMAN.1993.367680","url":null,"abstract":"Many educational establishments face the challenge of increasing staff-student ratio allied to student demand for greater flexibility. Computer aided learning, educational computer aided software engineering (CASE) and group decision support systems (GDSS) have valuable roles to play in fulfilling student expectation whilst optimising use of human resources. Increasingly institutions are considering wider and alternative forms of delivery, such as interactive video (IV). The author highlights the many considerations necessary before utilising IV as a form of course delivery. In addition the author classifies the range of products available, thus providing background for any institution with ideas of introducing such media.<<ETX>>","PeriodicalId":270591,"journal":{"name":"Proceedings of 1993 2nd IEEE International Workshop on Robot and Human Communication","volume":"113 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114772610","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mutual adaptive interface: basic concept
M. Takahashi, O. Kubo, H. Yoshikawa
Pub Date: 1993-11-03 | DOI: 10.1109/ROMAN.1993.367675
The basic concept of the mutual adaptive interface (MADI), which has two outstanding features for overcoming difficulties in adaptation, is proposed. The first is that the proposed adaptive interface uses information about the human operator estimated from physiological measures. The second is that the interface covers an extensive range of man-machine interaction with the aid of a feedback controller that determines how the adaptation is carried out.
{"title":"Mutual adaptive interface: basic concept","authors":"M. Takahashi, O. Kubo, H. Yoshikawa","doi":"10.1109/ROMAN.1993.367675","DOIUrl":"https://doi.org/10.1109/ROMAN.1993.367675","url":null,"abstract":"The basic concept of the mutual adaptive interface (MADI), which has two outstanding features for overcoming difficulties in adaptation is proposed. The first one is that the proposed adaptive interface utilizes the information about the human estimated through the physiological measures. The other point is that the proposed interface covers the extensive range of the man-machine interaction with the aid of the feedback controller which determines the way of adaptation.<<ETX>>","PeriodicalId":270591,"journal":{"name":"Proceedings of 1993 2nd IEEE International Workshop on Robot and Human Communication","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126892375","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Force flow between human and object in virtual world
Y. Kuni, M. Buss, H. Hashimoto
Pub Date: 1993-11-03 | DOI: 10.1109/ROMAN.1993.367719
In this paper we propose a dynamic force simulator (DFS) for force feedback in human-machine systems. The DFS simulates the object dynamics, contact model and friction characteristics of the human hand interacting with objects in a virtual reality, and aims at human skill acquisition as a first step toward the previously proposed intelligent assisting system (IAS). After deriving the kinematic and force relations between hand and object space, we propose a method for realizing desired feedback forces to the human operator. Interaction with the DFS allows appropriate forces to be calculated and fed back to the force-controlled actuators of the sensor glove we have developed.
{"title":"Force flow between human and object in virtual world","authors":"Y. Kuni, M. Buss, H. Hashimoto","doi":"10.1109/ROMAN.1993.367719","DOIUrl":"https://doi.org/10.1109/ROMAN.1993.367719","url":null,"abstract":"In this paper we propose a dynamic force simulator (DFS) for force feedback in human-machine systems. The DFS simulates object dynamics, contact model and friction characteristics of the human hand interacting with objects in a virtual reality and aims at the human skill acquisition as a first step of the previously proposed intelligent assisting system (IAS). After derivation of the kinematic and force relations between hand and object space we propose a method of realizing desired feedback forces to the human operator. Interaction with the DFS allows the calculation and feedback of appropriate forces to the force controlled actuators of the sensor glove we have developed.<<ETX>>","PeriodicalId":270591,"journal":{"name":"Proceedings of 1993 2nd IEEE International Workshop on Robot and Human Communication","volume":"42 1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131273343","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Predictive force display for tele-handling/machining system
M. Mitsuishi, T. Hori, T. Nagao
Pub Date: 1993-11-03 | DOI: 10.1109/ROMAN.1993.367729
With the development of virtual reality, tele-existence and remote collaboration technologies, it has become possible for a human being to operate a machine and handle objects in worlds that are in remote locations, vastly different in scale from the human world, or governed by physical laws different from those of the normal human world. To develop such a system, it is necessary to create an environment in which an operator feels as if he/she were in the world where the remote machine actually exists, by transmitting a readily perceivable impression of that world. In this paper, the authors use a machining center at the remote machine site as a tele-handling/machining system and propose a method of predictive force display for telemachining. Specifically, a force-smoothing method and a method for identifying the working environment for predictive force display are proposed.
{"title":"Predictive force display for tele-handling/machining system","authors":"M. Mitsuishi, T. Hori, T. Nagao","doi":"10.1109/ROMAN.1993.367729","DOIUrl":"https://doi.org/10.1109/ROMAN.1993.367729","url":null,"abstract":"With the development of virtual reality, tele-existence and remote collaboration technologies, it has become possible for a human being to operate a machine and handle objects in worlds that are in remote locations, vastly different in scale from the human world, or in which the governing physical laws are different from those in the normal human world. To develop such a system, it is necessary to create an environment in which an operator feels as if he/she were in the world where the remote machine actually exists by transmitting a readily perceivable impression of that world. In this paper, the authors use the machining center in the remote machine site for a telehandling/machining system and propose a method of predictive force display for telemachining. Specifically, a force smoothing method and the method of identification of the working environment for predictive force display are proposed.<<ETX>>","PeriodicalId":270591,"journal":{"name":"Proceedings of 1993 2nd IEEE International Workshop on Robot and Human Communication","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131364465","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Compliance control of an ultrasonic motor powered prosthetic forearm
M. Pecson, K. Ito, Zhiwei Luo, A. Kato, T. Aoyama, M. Ito
Pub Date: 1993-11-03 | DOI: 10.1109/ROMAN.1993.367742
The capability of a prosthetic device to mimic the response of the actual limb to voluntary motor commands and environmental loads should be addressed. This paper discusses the compliance control of an ultrasonic-motor-powered prosthetic forearm which uses cutaneously measured electromyogram (EMG) signals, sensed with electrodes over the muscles, as a means of detecting motor commands sent by the central nervous system (CNS). Compliance control of the artificial limb was studied by implementing a bilinear model of the forearm and hand. This model emphasizes the role of the viscoelastic properties of the musculoskeletal system of the actual limb in controlling its net configuration and movement. The flexor and extensor muscles extending over a joint influence the overall joint impedance and determine the equilibrium position of the joint. Relaxing both flexor and extensor muscles makes the joint compliant to external forces, while activating both muscles increases the impedance of the joint.
{"title":"Compliance control of an ultrasonic motor powered prosthetic forearm","authors":"M. Pecson, K. Ito, Zhiwei Luo, A. Kato, T. Aoyama, M. Ito","doi":"10.1109/ROMAN.1993.367742","DOIUrl":"https://doi.org/10.1109/ROMAN.1993.367742","url":null,"abstract":"The capability of a prosthetic device to mimic the response of the actual limb with respect to voluntary motor commands and to environmental loads should be addressed. This paper discusses the compliance control of an ultrasonic motor powered prosthetic forearm which utilizes cutaneously measured electromyogram (EMG) signals sensed with electrodes over the muscles as means of detecting motor commands sent by the central nervous system (CNS). Compliance control of the artificial limb was studied by implementing the bilinear model of the forearm and hand. This model emphasizes the role of the visco-elastic properties of the musculo-skeletal system of the actual limb in controlling its net configuration and movement. The flexor and extensor muscles extending over a joint influence the overall joint impedance and determines the equilibrium position of the joint. Relaxing both flexor and extensor muscles makes the joint compliant to external forces, while activating both muscles increases the impedance of the joint.<<ETX>>","PeriodicalId":270591,"journal":{"name":"Proceedings of 1993 2nd IEEE International Workshop on Robot and Human Communication","volume":"80 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130390241","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Coordinates for trajectory formation of human multi-joint arm movement
R. Osu
Pub Date: 1993-11-03 | DOI: 10.1109/ROMAN.1993.367740
Trajectory formation during human multi-joint arm movements was investigated. Experiments showed that the trajectory of unconstrained sequential movements (elliptical drawing movements) in a horizontal plane is heavily influenced by differences in body coordinates. This suggests that trajectory planning for such movements is not based solely on task-oriented visual coordinates but also depends on the complicated dynamics of the musculoskeletal system of the human arm.
{"title":"Coordinates for trajectory formation of human multi-joint arm movement","authors":"R. Osu","doi":"10.1109/ROMAN.1993.367740","DOIUrl":"https://doi.org/10.1109/ROMAN.1993.367740","url":null,"abstract":"Trajectory formation during human multi-joint arm movements was investigated. Experiments showed the trajectory of unconstrained sequential movements (elliptical drawing movements) in a horizontal plane is heavily influenced by differences in body coordinates. Therefore it is suggested that trajectory planning in such movements is not solely based on the task-oriented visual coordinates but dependent on complicated dynamics within the musculoskeletal system of the human arm.<<ETX>>","PeriodicalId":270591,"journal":{"name":"Proceedings of 1993 2nd IEEE International Workshop on Robot and Human Communication","volume":"61 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124654096","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Analysis and synthesis of facial expression using high-definition wire frame model
T. Sakaguchi, M. Ueno, S. Morishima, H. Harashima
Pub Date: 1993-11-03 | DOI: 10.1109/ROMAN.1993.367723
We propose a method to extract facial expression parameters through quantitative analysis of the 3-dimensional movement of the facial surface that arises from real human expressions, using a high-definition model. By dividing the face surface into regions with similar movement characteristics and modeling each region with a rounded 3-dimensional surface, we can derive the muscle control parameters and the rules of movement. Based on the findings of this analysis, we also propose a method for synthesizing realistic facial expression images from this high-definition model.
{"title":"Analysis and synthesis of facial expression using high-definition wire frame model","authors":"T. Sakaguchi, M. Ueno, S. Morishima, H. Harashima","doi":"10.1109/ROMAN.1993.367723","DOIUrl":"https://doi.org/10.1109/ROMAN.1993.367723","url":null,"abstract":"We propose the method to extract facial expression parameter with quantitative analysis of 9-dimensional movement of facial surface that arises from real human expression, using high-definition model. By dividing the face surface to some regions having similar characteristics of movement and modeling with 3-dimensional round surface in each region respectively, we can derive the muscle control parameters and the rule of movement. Considering the findings of this analysis we propose also a synthesis method of real facial expression image based on this high-definition model.<<ETX>>","PeriodicalId":270591,"journal":{"name":"Proceedings of 1993 2nd IEEE International Workshop on Robot and Human Communication","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129413303","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Recognition of band-pass filtered facial images: a comparison between perceptual and memory processes
H. Yoshida, T. Toshima
Pub Date: 1993-11-03 | DOI: 10.1109/ROMAN.1993.367722
The present study investigates the selectivity of perceptual and memory processes to various spatial frequencies. Subjects were asked to rate the similarity of a band-pass filtered face stimulus to the original one. In one case, the two stimuli were presented simultaneously side by side (perceptual condition); in the other, they were presented successively (memory condition). The results showed that the peak frequency rated most representative of the original face was 26.9 cycles/image in both the perceptual and memory conditions, and there were no significant differences in rated similarity between the two conditions across all spatial frequencies. Since linguistic memory might help subjects retain the details of a face, another session requiring a parallel linguistic task was carried out. The interference task reduced the similarity scores in the memory condition; however, the effect was consistent across all frequencies, and the peak frequency did not shift at all. This suggests that the spatial-frequency characteristics of face recognition arise from the perceptual processing of visual information rather than from visual memory.
{"title":"Recognition of band-pass filtered facial images: a comparison between perceptual and memory processes","authors":"H. Yoshida, T. Toshima","doi":"10.1109/ROMAN.1993.367722","DOIUrl":"https://doi.org/10.1109/ROMAN.1993.367722","url":null,"abstract":"The present study is designed to investigate the selectivity of perceptual and memory processes to the various spatial frequencies. Subjects were asked to rate the similarity of bandpass filtered face stimulus, comparing with the original one. In one case, two stimuli were presented simultaneously side by side (perceptual condition), and in the other case, they were presented successively (memory condition). The results showed that the peak frequency which had been rated the most representative of the original face was 26.9 cycles/image in both perceptual and memory conditions, and there were no significant differences in rated similarity between the two conditions throughout all spatial frequencies. Since linguistic memory might help subjects to retain the details of face, another session which required them to do a linguistic parallel task was carried out. The interference task reduced the similarity scores in the memory condition. However, the effect was consistent across all frequencies, and the peak frequency did not shift at all. Thus, it was suggested that the spatial frequency characteristics of face recognition are probably due to the one of perceptual processing of visual information rather than the visual memory.<<ETX>>","PeriodicalId":270591,"journal":{"name":"Proceedings of 1993 2nd IEEE International Workshop on Robot and Human Communication","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114154414","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}