Shunji Uchino, N. Abe, Hiroshi Takada, T. Yagi, H. Taki, Shoujie He
Title: Virtual Reality Interaction System between User and Avatar with Distributed Processing
DOI: 10.1109/WAINA.2008.124
Published in: 22nd International Conference on Advanced Information Networking and Applications - Workshops (AINA Workshops 2008)
Publication date: 2008-03-25
Citations: 1
Abstract
In this research, a dialog environment between a human and a virtual agent has been constructed. With commercial off-the-shelf VR technologies, special devices such as data gloves have to be used for interaction, but manipulating objects with such devices is difficult. If there were a helper with direct access to the objects in the virtual space, the user could simply ask the helper. The question, however, is how to communicate with such a helper. This paper presents a solution to this question. The basic idea is to utilize speech recognition and gesture recognition systems. Experimental results demonstrate the effectiveness of the approach in facilitating man-machine interaction and communication. The environment constructed in this research allows a user to communicate with a personified agent in the virtual environment by talking and showing gestures. For example, a user can point at a virtual object with a finger and ask the agent to manipulate that object.