Myungho Lee, Nahal Norouzi, G. Bruder, P. Wisniewski, G. Welch
In this paper, we investigate the effects of the physical influence of a virtual human (VH) in the context of face-to-face interaction in augmented reality (AR). In our study, participants played a tabletop game with a VH, in which each player takes a turn and moves their own token along the designated spots on the shared table. We compared two conditions as follows: the VH in the virtual condition moves a virtual token that can only be seen through AR glasses, while the VH in the physical condition moves a physical token as the participants do; therefore the VH's token can be seen even in the periphery of the AR glasses. For the physical condition, we designed an actuator system underneath the table. The actuator moves a magnet under the table which then moves the VH's physical token over the surface of the table. Our results indicate that participants felt higher co-presence with the VH in the physical condition, and participants assessed the VH as a more physical entity compared to the VH in the virtual condition. We further observed transference effects when participants attributed the VH's ability to move physical objects to other elements in the real world. Also, the VH's physical influence improved participants' overall experience with the VH. We discuss potential explanations for the findings and implications for future shared AR tabletop setups.
{"title":"The physical-virtual table: exploring the effects of a virtual human's physical influence on social interaction","authors":"Myungho Lee, Nahal Norouzi, G. Bruder, P. Wisniewski, G. Welch","doi":"10.1145/3281505.3281533","DOIUrl":"https://doi.org/10.1145/3281505.3281533","url":null,"abstract":"In this paper, we investigate the effects of the physical influence of a virtual human (VH) in the context of face-to-face interaction in augmented reality (AR). In our study, participants played a tabletop game with a VH, in which each player takes a turn and moves their own token along the designated spots on the shared table. We compared two conditions as follows: the VH in the virtual condition moves a virtual token that can only be seen through AR glasses, while the VH in the physical condition moves a physical token as the participants do; therefore the VH's token can be seen even in the periphery of the AR glasses. For the physical condition, we designed an actuator system underneath the table. The actuator moves a magnet under the table which then moves the VH's physical token over the surface of the table. Our results indicate that participants felt higher co-presence with the VH in the physical condition, and participants assessed the VH as a more physical entity compared to the VH in the virtual condition. We further observed transference effects when participants attributed the VH's ability to move physical objects to other elements in the real world. Also, the VH's physical influence improved participants' overall experience with the VH. We discuss potential explanations for the findings and implications for future shared AR tabletop setups.","PeriodicalId":138249,"journal":{"name":"Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130650018","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Shotaro Ichikawa, Kazuki Takashima, Anthony Tang, Y. Kitamura
We present a concept-based world building approach, realized in a system called VR Safari Park, which allows users to rapidly create and manipulate a world simulation. Conventional world building tools focus on the manipulation and arrangement of entities to set up the simulation, which is time consuming as it requires frequent view and entity manipulations. Our approach focuses on a far simpler mechanic, where users add virtual blocks which represent world entities (e.g. animals, terrain, weather, etc.) to a World Tree, which represents the simulation. In so doing, the World Tree provides a quick overview of the simulation, and users can easily set up scenarios in the simulation without having to manually perform fine-grain manipulations on world entities. A preliminary user study found that the proposed interface is effective and usable for novice users without prior immersive VR experience.
{"title":"VR safari park: a concept-based world building interface using blocks and world tree","authors":"Shotaro Ichikawa, Kazuki Takashima, Anthony Tang, Y. Kitamura","doi":"10.1145/3281505.3281517","DOIUrl":"https://doi.org/10.1145/3281505.3281517","url":null,"abstract":"We present a concept-based world building approach, realized in a system called VR Safari Park, which allows users to rapidly create and manipulate a world simulation. Conventional world building tools focus on the manipulation and arrangement of entities to set up the simulation, which is time consuming as it requires frequent view and entity manipulations. Our approach focuses on a far simpler mechanic, where users add virtual blocks which represent world entities (e.g. animals, terrain, weather, etc.) to a World Tree, which represents the simulation. In so doing, the World Tree provides a quick overview of the simulation, and users can easily set up scenarios in the simulation without having to manually perform fine-grain manipulations on world entities. A preliminary user study found that the proposed interface is effective and usable for novice users without prior immersive VR experience.","PeriodicalId":138249,"journal":{"name":"Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131143486","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ying-Li Lin, Tsai-Yi Chou, Yu-Cheng Lieo, Yu-Cheng Huang, Ping-Hsuan Han
When people eat, taste is complex and easily influenced by the other senses. Visual, olfactory, and haptic cues, and even past experiences, can affect human perception, which in turn creates more taste possibilities. We present TransFork, an eating tool with olfactory feedback that augments the tasting experience with a video see-through head-mounted display. Additionally, we design a recipe via preliminary experiments to find a taste-conversion formula that enhances the flavor of foods and changes how users perceive and recognize the food. In this demonstration, we prepare a mini feast of bite-sized fruit; participants use TransFork to eat food A while smelling the scent of food B, which is stored in an aromatic box and guided to the nose by airflow. Before they deliver the food to their mouth, the head-mounted display overlays the color of food B on food A, using the QR code on the aromatic box for registration. With these augmented reality techniques and the recipe, the tasting experience can be augmented or enhanced, offering a promising and playful approach to eating.
{"title":"TransFork","authors":"Ying-Li Lin, Tsai-Yi Chou, Yu-Cheng Lieo, Yu-Cheng Huang, Ping-Hsuan Han","doi":"10.1145/3281505.3281560","DOIUrl":"https://doi.org/10.1145/3281505.3281560","url":null,"abstract":"When people eat, the taste is very complex and be influenced easily by other senses. Such as visual, olfactory, and haptic, even past experiences, can affect the human perception, which in turn creates more taste possibilities. We present TransFork, an eating tool with olfactory feedback, which augments the tasting experience with video see-through head-mounted display. Additionally, we design a recipe via preliminary experiments to find out the taste conversion formula, which could enhance the flavor of foods and change the user perception to recognize the food. In this demonstration, we prepare a mini feast with bite-sized fruit, the participants use the TransFork to eat food A and smell the scent of food B stored at the aromatic box via airflow guiding. Before they deliver the food to their mouth, the head-mounted display augmented the color of food B on food A by the QR code on the aromatic box. With this augmented reality techniques and the recipe, the tasting experience could be augmented or enhanced, which is a potential approach and could be a playful used for eating.","PeriodicalId":138249,"journal":{"name":"Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology","volume":"61 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127240794","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We propose a low-cost omni-directional VR walking platform based on thigh support and motion estimation. Specifically, the platform supports the user's thighs in the walking direction, and the user makes a stepping motion while leaning toward the walking direction, which makes it possible to shift the center of gravity of the foot sole as in actual walking. Moreover, the platform estimates the foot movement, which is constrained by the thigh-supporting parts, using load cells around the user's thighs, and renders the scene to the HMD according to the estimated foot movement. As a result, our platform provides a more realistic walking sensation at low cost.
{"title":"A low-cost omni-directional VR walking platform by thigh supporting and motion estimation","authors":"Wataru Wakita, Tomoyuki Takano, Toshiyuki Hadama","doi":"10.1145/3281505.3281570","DOIUrl":"https://doi.org/10.1145/3281505.3281570","url":null,"abstract":"We propose a low-cost omni-directional VR walking platform by thigh supporting and motion estimation. Specifically, this platform supports the thighs of the user to the walking direction, and the user make the stepping motion while leaning to the walking direction. Thereby making it possible to change the center of gravity of the foot sole like an actual walking. Moreover, our platform estimate the foot movement which constrained by thigh supporting part with load cells around the user's thigh, and render to the HMD according to the estimated foot movement. As a result, our platform enables user to make the walking sensation more realistic at low-cost.","PeriodicalId":138249,"journal":{"name":"Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126732534","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In recent years, traditional art museums have begun to use AR/VR technology to make visits more engaging and interactive. This paper details an application which provides features designed to be immediately engaging and educational to museum visitors within an AR view. The application superimposes an automatically generated 3D representation over a scanned artwork, along with the work's authorship, title, and date of creation. A GUI allows the user to exaggerate or decrease the depth scale of the 3D representation, as well as to search for related works of music. Given this music as audio input, the generated 3D model will act as an audio visualizer by changing depth scale based on input frequency.
{"title":"Automatic 3D modeling of artwork and visualizing audio in an augmented reality environment","authors":"Elijah Schwelling, Kyungjin Yoo","doi":"10.1145/3281505.3281617","DOIUrl":"https://doi.org/10.1145/3281505.3281617","url":null,"abstract":"In recent years, traditional art museums have begun to use AR/VR technology to make visits more engaging and interactive. This paper details an application which provides features designed to be immediately engaging and educational to museum visitors within an AR view. The application superimposes an automatically generated 3D representation over a scanned artwork, along with the work's authorship, title, and date of creation. A GUI allows the user to exaggerate or decrease the depth scale of the 3D representation, as well as to search for related works of music. Given this music as audio input, the generated 3D model will act as an audio visualizer by changing depth scale based on input frequency.","PeriodicalId":138249,"journal":{"name":"Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology","volume":"58 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122436177","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In recent years, VR technology has developed rapidly and attracted public attention. However, VR sickness remains an unsolved problem in the VR experience. VR sickness is presumed to be caused by crosstalk between sensory and cognitive systems [1]; since there is no objective way to measure these systems, however, it is difficult to measure VR sickness. In this paper, we collect electroencephalogram (EEG) data while participants experience VR videos, and we propose a deep neural network (DNN) algorithm that measures VR sickness from the EEG data. We conducted experiments to find an EEG preprocessing method and a DNN structure suitable for this task, and obtained an accuracy of 99.12% in our study.
{"title":"VR sickness measurement with EEG using DNN algorithm","authors":"D. Jeong, Sangbong Yoo, Yun Jang","doi":"10.1145/3281505.3283387","DOIUrl":"https://doi.org/10.1145/3281505.3283387","url":null,"abstract":"Recently, VR technology is rapidly developing and attracting public attention. However, VR Sickness is a problem that is still not solved in the VR experience. The VR sickness is presumed to be caused by crosstalk between sensory and cognitive systems [1]. However, since there is no objective way to measure sensory and cognitive systems, it is difficult to measure VR sickness. In this paper, we collect EEG data while participants experience VR videos. We propose a Deep Neural Network (DNN) deep learning algorithm by measuring VR sickness through electroencephalogram (EEG) data. Experiments have been conducted to search for an appropriate EEG data preprocessing method and DNN structure suitable for the deep learning, and the accuracy of 99.12% is obtained in our study.","PeriodicalId":138249,"journal":{"name":"Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126123766","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Real-time training systems exist that teach the correct golf-swing form by providing visual feedback to users. However, real-time visual feedback requires users to watch the display during their motion, which leads to incorrect posture. This paper proposes a real-time golf-swing training system using sonification and sound image localization. The system provides real-time audio feedback based on the difference between pre-recorded model data and real-time user data, consisting of the roll, pitch, and yaw angles of the golf club shaft. The system also uses sound image localization so that the user hears the audio feedback from the direction of the club head. The user can thus recognize the current posture of the club without shifting their gaze.
{"title":"A real-time golf-swing training system using sonification and sound image localization","authors":"Yuka Tanaka, Homare Kon, H. Koike","doi":"10.1145/3281505.3281604","DOIUrl":"https://doi.org/10.1145/3281505.3281604","url":null,"abstract":"There are real-time training systems to learn the correct golf swing form by providing visual feedback to the users. However, real-time visual feedback requires the users to see the display during their motion that leads to the wrong posture. This paper proposed a real-time golf-swing training system using sonification and sound image localization. The system provides real-time audio feedback based on the difference between the pre-recorded model data and real-time user data, which consists of the roll, pitch, and yaw angles of a golf club shaft. The system also used sound image localization so that the user can hear the audio feedback from the direction of the club head. The user can recognize the current posture of the club without moving their gaze.","PeriodicalId":138249,"journal":{"name":"Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129805925","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We are pursuing a vision of reactive interior spaces that are aware of people's actions and transform according to changing needs. We envision furniture and walls that act as interactive displays and that shape-shift to the appropriate physical form, interactive visual content, and modality. This paper briefly describes our proposal based on our recent efforts toward realizing this vision.
{"title":"Designing dynamic aware interiors","authors":"Y. Kitamura, Kazuki Takashima, Kazuyuki Fujita","doi":"10.1145/3281505.3281603","DOIUrl":"https://doi.org/10.1145/3281505.3281603","url":null,"abstract":"We are pursuing a vision of reactive interior spaces that are aware of people's actions and transform according to changing needs. We envision furniture and walls that act as interactive displays and that shapeshift to the correct physical form, and the appropriate interactive visual content and modality. This paper briefly describes our proposal based on our recent efforts on realizing this vision.","PeriodicalId":138249,"journal":{"name":"Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123989564","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Creativity and innovation training is at the core of art education, and modern technology provides more effective tools to help students develop artistic creativity. In this paper, we propose employing augmented reality technology to assist artistic creativity education. We first analyze the inefficiencies of traditional artistic creation training. We then introduce our AR-based smartphone app in technical detail and explain how it can improve and accelerate artistic creativity training. Finally, we show three examples created with our AR app to demonstrate the effectiveness of the proposed method.
{"title":"An AR system for artistic creativity education","authors":"Jiajia Tan, Boyang Gao, Xiaobo Lu","doi":"10.1145/3281505.3283396","DOIUrl":"https://doi.org/10.1145/3281505.3283396","url":null,"abstract":"Creativity and innovation training is the core of the art education. Modern technology provides more effective tools to help students obtain artistic creativity. In this paper, we propose to employ augmented reality technology to assist artistic creativity education. We first analyze the inefficiency of traditional artistic creation training. We then introduce our AR-based smartphone app with technical detail and explain how it can improve accelerate artistic creativity training. We finally show 3 examples created by our AR app to demonstrate the effectiveness of our proposed method.","PeriodicalId":138249,"journal":{"name":"Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127728071","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mixed reality offers an immersive and interactive experience through the use of head-mounted displays and in-air gestures. Visitors can discover additional content virtually, on top of existing physical items. For a small-scale exhibition at a cafe, we developed a Microsoft HoloLens application to create an interactive experience on top of a collection of historic physical items. Through the public showings of this exhibition, we received positive feedback on our system and found that it also helped to promote brand perception. In this demo, visitors can try a mixed reality experience similar to the one shown at the exhibition.
{"title":"Using mixed reality for promoting brand perception","authors":"Kelvin Cheng, Ichiro Furusawa","doi":"10.1145/3281505.3281574","DOIUrl":"https://doi.org/10.1145/3281505.3281574","url":null,"abstract":"Mixed reality offers an immersive and interactive experience through the use of head mounted displays and in-air gestures. Visitors can discover additional content virtually, on top of existing physical items. For a small-scale exhibition at a cafe, we developed a Microsoft HoloLens application to create an interactive experience on top of a collection of historic physical items. Through public experiences of this exhibition, we received positive feedback of our system, and found that it also helped to promote brand perception. In this demo, visitors can experience a similar mixed reality experience that was shown at the exhibition.","PeriodicalId":138249,"journal":{"name":"Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115770850","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}