Experimenting novel virtual-reality immersion strategy to alleviate cybersickness. S. F. M. Zaidi and T. Male. DOI: 10.1145/3281505.3281613
Cybersickness, experienced when using virtual reality (VR) through head-mounted displays (HMDs), is also known as motion sickness in VR environments. Researchers and developers have been working to find appropriate technological means to alleviate this feeling of sickness. In this paper, we aim to further improve VR immersion via HMDs by strengthening the user's sense of presence in and engagement with the virtual world. Our results show that, by offering alternative interaction approaches within the same VR environment, cybersickness can be overcome, improving user acceptance of VR technology.
{"title":"Experimenting novel virtual-reality immersion strategy to alleviate cybersickness","authors":"S. F. M. Zaidi, T. Male","doi":"10.1145/3281505.3281613","DOIUrl":"https://doi.org/10.1145/3281505.3281613","url":null,"abstract":"Cybersickness, related to virtual-reality (VR) using head-mounted devices (HMD), is also known as motion sickness in VR environment. Researchers and developers have been working to find an appropriate technological facility to alleviate this feeling of sickness. In this paper, we aim to further improve VR immersion technique via HMD by strengthening userś sense of presence in the virtual world along with engagement. Our results show that, with alternative ways in the same VR environment, cybersickness could be overcome resulting in user acceptability of VR technology.","PeriodicalId":138249,"journal":{"name":"Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132938941","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The effect of chair type on users' viewing experience for 360-degree video. Yang Hong, Andrew MacQuarrie, and A. Steed. DOI: 10.1145/3281505.3281519
The consumption of 360-degree videos with head-mounted displays (HMDs) is increasing rapidly. A large number of HMD users watch 360-degree videos at home, often on non-swivel seats; however, videos are frequently designed to require the user to turn around. This work explores how users' chair type might influence their viewing experience. A between-subjects experiment was conducted with 41 participants. Three chair conditions were used: fixed, half-swivel, and full-swivel. A variety of measures were explored using eye-tracking, questionnaires, tasks, and semi-structured interviews. Results suggest that the fixed and half-swivel chairs discouraged exploration for certain videos compared with the full-swivel chair. Additionally, participants in the fixed chair had worse spatial awareness and greater concern about missing something for certain videos than those in the full-swivel chair. No significant differences were found in incidental memory, general engagement, or simulator sickness among the three chair conditions. Furthermore, thematic analysis of post-experiment interviews revealed four themes regarding the restrictive chairs: physical discomfort, difficulty following moving objects, reduced orientation, and guided attention. Based on the findings, practical implications, limitations, and future work are discussed.
{"title":"The effect of chair type on users' viewing experience for 360-degree video","authors":"Yang Hong, Andrew MacQuarrie, A. Steed","doi":"10.1145/3281505.3281519","DOIUrl":"https://doi.org/10.1145/3281505.3281519","url":null,"abstract":"The consumption of 360-degree videos with head-mounted displays (HMDs) is increasing rapidly. A large number of HMD users watch 360-degree videos at home, often on non-swivel seats; however videos are frequently designed to require the user to turn around. This work explores how the difference in users' chair type might influence their viewing experience. A between-subject experiment was conducted with 41 participants. Three chair conditions were used: fixed, half-swivel and full-swivel. A variety of measures were explored using eye-tracking, questionnaires, tasks and semi-structured interviews. Results suggest that the fixed and half-swivel chairs discouraged exploration for certain videos compared with the full-swivel chair. Additionally, participants in the fixed chair had worse spatial awareness and greater concern about missing something for certain video than those in the full-swivel chair. No significant differences were found in terms of incidental memory, general engagement and simulator sickness among the three chair conditions. Furthermore, thematic analysis of post-experiment interviews revealed four themes regarding the restrictive chairs: physical discomfort, difficulty following moving objects, reduced orientation and guided attention. Based on the findings, practical implications, limitations and future work are discussed.","PeriodicalId":138249,"journal":{"name":"Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132951749","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Evaluating ray casting and two gaze-based pointing techniques for object selection in virtual reality. Tomi Nukarinen, J. Kangas, Jussi Rantala, Olli Koskinen, and R. Raisamo. DOI: 10.1145/3281505.3283382
Selecting an object is a basic interaction task in virtual reality (VR) environments. Interaction techniques based on gaze pointing have potential for this elementary task, yet there is little empirical evidence concerning their benefits and drawbacks in VR. We ran an experiment studying three interaction techniques: ray casting, dwell time, and gaze trigger, where gaze trigger combined gaze pointing with controller-based selection. We studied user experience and interaction speed in a simple object selection task. The results indicated that ray casting outperforms both gaze-based methods, while gaze trigger performs better than dwell time.
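The two gaze-based techniques differ only in how a selection is confirmed: dwell time fires once the gaze has rested on the same object long enough, while gaze trigger waits for an explicit controller press. A minimal sketch of that distinction follows; the 500 ms dwell threshold and the per-frame update interface are illustrative assumptions, not values from the paper:

```python
import time

DWELL_THRESHOLD_S = 0.5  # illustrative; the paper's threshold is not given here

class GazeSelector:
    """Confirms selection of the object currently under the user's gaze."""

    def __init__(self, mode):
        self.mode = mode          # "dwell" or "trigger"
        self.gazed_object = None  # object id currently looked at
        self.gaze_start = 0.0     # when the current fixation began

    def update(self, hit_object, trigger_pressed):
        """Call once per frame with the gaze-ray hit and controller state.

        Returns the selected object id, or None if nothing was selected.
        """
        if hit_object != self.gazed_object:
            # Gaze moved to a new object: restart the fixation timer.
            self.gazed_object = hit_object
            self.gaze_start = time.monotonic()
            return None
        if self.gazed_object is None:
            return None
        if self.mode == "dwell":
            # Dwell time: select after an uninterrupted fixation.
            if time.monotonic() - self.gaze_start >= DWELL_THRESHOLD_S:
                self.gaze_start = time.monotonic()  # avoid immediate re-trigger
                return self.gazed_object
        elif self.mode == "trigger":
            # Gaze trigger: gaze points, the controller button confirms.
            if trigger_pressed:
                return self.gazed_object
        return None
```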
{"title":"Evaluating ray casting and two gaze-based pointing techniques for object selection in virtual reality","authors":"Tomi Nukarinen, J. Kangas, Jussi Rantala, Olli Koskinen, R. Raisamo","doi":"10.1145/3281505.3283382","DOIUrl":"https://doi.org/10.1145/3281505.3283382","url":null,"abstract":"Selecting an object is a basic interaction task in virtual reality (VR) environments. Interaction techniques with gaze pointing have potential for this elementary task. There appears to be little empirical evidence concerning the benefits and drawbacks of these methods in VR. We ran an experiment studying three interaction techniques: ray casting, dwell time and gaze trigger, where gaze trigger was a combination of gaze pointing and controller selection. We studied user experience and interaction speed in a simple object selection task. The results indicated that ray casting outperforms both gaze-based methods while gaze trigger performs better than dwell time.","PeriodicalId":138249,"journal":{"name":"Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology","volume":"90 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129528190","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
EXG wearable human-machine interface for natural multimodal interaction in VR environment. Ker-Jiun Wang, Quanbo Liu, Soumya Vhasure, Quanfeng Liu, C. Zheng, and Prakash Thakur. DOI: 10.1145/3281505.3281577
Current assistive technologies are complicated, cumbersome, and not portable, and users still need extensive fine motor control to operate them. Brain-computer interfaces (BCIs) could provide an alternative approach to these problems. However, current BCIs have low classification accuracy and require tedious human-learning procedures. Complicated electroencephalogram (EEG) caps, whose many electrodes must be attached to the user's head to identify imagined motor commands, bring considerable inconvenience. In this demonstration, we showcase EXGbuds, a compact, non-obtrusive, and comfortable wearable device with non-invasive biosensing technology, which people can wear for long hours without fatigue. Using our machine learning algorithms, we can identify various eye movements and facial expressions with over 95% accuracy, so that people with motor disabilities can play VR games totally "hands-free".
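The abstract does not describe the classifier itself; as a hedged illustration of the general pipeline such systems use (windowed biosignal features fed to a supervised classifier), something like the sketch below is plausible. The feature set, the random-forest model, and all names here are assumptions for illustration only:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def extract_features(window):
    """Simple time-domain features from one window of a biosignal channel."""
    return np.array([
        window.mean(),
        window.std(),
        np.abs(np.diff(window)).mean(),  # mean absolute first difference
        window.max() - window.min(),     # peak-to-peak amplitude
    ])

def windows_to_features(recordings):
    """recordings: iterable of (window, label) pairs -> feature matrix, labels."""
    X = np.vstack([extract_features(w) for w, _ in recordings])
    y = np.array([label for _, label in recordings])
    return X, y

# Hypothetical usage with windows labeled "blink", "look_left", "smile", ...:
# X, y = windows_to_features(train_data)
# clf = RandomForestClassifier(n_estimators=100).fit(X, y)
# prediction = clf.predict(extract_features(new_window).reshape(1, -1))
```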
{"title":"EXG wearable human-machine interface for natural multimodal interaction in VR environment","authors":"Ker-Jiun Wang, Quanbo Liu, Soumya Vhasure, Quanfeng Liu, C. Zheng, Prakash Thakur","doi":"10.1145/3281505.3281577","DOIUrl":"https://doi.org/10.1145/3281505.3281577","url":null,"abstract":"Current assistive technologies are complicated, cumbersome, not portable, and users still need to apply extensive fine motor control to operate the device. Brain-Computer Interfaces (BCIs) could provide an alternative approach to solve these problems. However, the current BCIs have low classification accuracy and require tedious human-learning procedures. The use of complicated Electroencephalogram (EEG) caps, where many electrodes must be attached on the user's head to identify imaginary motor commands, brings a lot of inconvenience. In this demonstration, we will showcase EXGbuds, a compact, non-obtrusive, and comfortable wearable device with non-invasive biosensing technology. People can comfortably wear it for long hours without tiring. Under our developed machine learning algorithms, we can identify various eye movements and facial expressions with over 95% accuracy, such that people with motor disabilities could have a fun time to play VR games totally \"Hands-free\".","PeriodicalId":138249,"journal":{"name":"Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130908764","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Effect of accompanying onomatopoeia with sound feedback toward presence and user experience in virtual reality. Jiwon Oh and G. Kim. DOI: 10.1145/3281505.3283401
Onomatopoeia refers to a word that phonetically imitates the sound it describes. In comics and video, it is often used in captions to dramatize, emphasize, exaggerate, and draw attention to the situation. In this paper, we explore whether the use of onomatopoeia could bring about similar effects and improve the user experience in virtual reality. We present an experiment comparing users' subjective experiences and attentive performance in two virtual worlds, each configured in two test conditions: (1) sound feedback without onomatopoeia and (2) sound feedback with it. Our experiment found that the moderate and strategic use of onomatopoeia can indeed help direct user attention, offer object affordance, and thereby enhance user experience and even the sense of presence and immersion.
{"title":"Effect of accompanying onomatopoeia with sound feedback toward presence and user experience in virtual reality","authors":"Jiwon Oh, G. Kim","doi":"10.1145/3281505.3283401","DOIUrl":"https://doi.org/10.1145/3281505.3283401","url":null,"abstract":"Onomatopoeia refers to a word that phonetically imitates the sound. It is often used, in comics or video, in caption as a way to dramatize, emphasize, exaggerate and draw attention the situation. In this paper we explore if the use of onomatopoeia could also bring about similar effects and improve the user experience in virtual reality. We present an experiment comparing the user's subjective experiences and attentive performance in two virtual worlds, each configured in two test conditions: (1) sound feedback with no onomatopoeia and (2) sound feedback with it. Our experiment has found that the moderate and strategic use of onomatopoeia can indeed help direct user attention, offer object affordance and thereby enhance user experience and even the sense of presence and immersion.","PeriodicalId":138249,"journal":{"name":"Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133554979","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AR DeepCalorieCam V2: food calorie estimation with CNN and AR-based actual size estimation. Ryosuke Tanno, Takumi Ege, and Keiji Yanai. DOI: 10.1145/3281505.3281580
In most cases, estimated calories are simply associated with the estimated food categories, or with a relative size compared to the standard serving of each category, which the user usually provides manually. Moreover, for calorie estimation based on the amount of a meal, the user conventionally needs to register a reference object of known size in advance and photograph the food together with it. In this demo, we propose a new approach to food calorie estimation that combines a CNN with Augmented Reality (AR)-based actual size estimation. Using the Apple ARKit framework, our demo app measures the actual size of the meal area by acquiring real-world coordinates as three-dimensional vectors. By measuring the meal area directly, the app calculates size more accurately than the previous method, improving calorie estimation accuracy.
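The geometric step behind this, estimating the area of a roughly planar meal region from the 3D world coordinates that AR hit tests return for its corners, can be sketched in a few lines. This illustrates the principle only; ARKit itself is used from Swift, and the triangle-fan area formula and example corner points below are assumptions, not the paper's implementation:

```python
import numpy as np

def polygon_area_3d(corners):
    """Area of a planar polygon given its 3D corner points in order.

    Fans triangles out from the first corner and sums their cross
    products; half the norm of the accumulated vector is the area.
    """
    corners = np.asarray(corners, dtype=float)
    origin = corners[0]
    total = np.zeros(3)
    for i in range(1, len(corners) - 1):
        total += np.cross(corners[i] - origin, corners[i + 1] - origin)
    return 0.5 * np.linalg.norm(total)

# Hypothetical example: four corners of a plate region, in meters,
# as AR hit-test results might provide them.
plate = [(0.00, 0.0, 0.00), (0.20, 0.0, 0.00),
         (0.20, 0.0, 0.15), (0.00, 0.0, 0.15)]
print(polygon_area_3d(plate))  # 0.03 m^2, i.e. a 20 cm x 15 cm region
```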
{"title":"AR DeepCalorieCam V2: food calorie estimation with CNN and AR-based actual size estimation","authors":"Ryosuke Tanno, Takumi Ege, Keiji Yanai","doi":"10.1145/3281505.3281580","DOIUrl":"https://doi.org/10.1145/3281505.3281580","url":null,"abstract":"In most of the cases, the estimated calories are just associated with the estimated food categories, or the relative size compared to the standard size of each food category which are usually provided by a user manually. In addition, in the case of calorie estimation based on the amount of meal, a user conventionally needs to register a size-known reference object in advance and to take a food photo with the registered reference object. In this demo, we propose a new approach for food calorie estimation with CNN and Augmented Reality (AR)-based actual size estimation. By using Apple ARKit framework, we can measure the actual size of the meal area by acquiring the coordinates on the real world as a three-dimensional vector, we implemented this demo app. As a result, it is possible to calculate the size more accurately than in the previous method by measuring the meal area directly, the calorie estimation accuracy has improved.","PeriodicalId":138249,"journal":{"name":"Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122203648","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Perceptual model optimized efficient foveated rendering. Zipeng Zheng, Zhuo Yang, Yinwei Zhan, Yuqing Li, and Wenxin Yu. DOI: 10.1145/3281505.3281588
The higher resolutions, wider fields of view, and increasing frame rates of HMDs demand ever more VR computing resources. Foveated rendering is a key solution to these challenges. This paper introduces foveated rendering optimized by a perceptual model: tessellation levels and culling areas are adaptively adjusted based on visual sensitivity. We improve rendering performance while preserving perceived visual quality.
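The core idea, reducing geometric detail with angular distance from the gaze point because visual acuity falls off with eccentricity, can be sketched as follows. The falloff curve, the 5-degree foveal region, and the level bounds are illustrative assumptions, not the paper's perceptual model:

```python
import math

def eccentricity_deg(gaze_dir, point_dir):
    """Angle in degrees between the gaze direction and a point's direction."""
    dot = sum(g * p for g, p in zip(gaze_dir, point_dir))
    norm = (math.sqrt(sum(g * g for g in gaze_dir))
            * math.sqrt(sum(p * p for p in point_dir)))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

def tessellation_level(ecc_deg, max_level=64, min_level=1):
    """Map eccentricity to a tessellation level: full detail inside the
    fovea (< ~5 degrees), then an exponential falloff toward the periphery."""
    if ecc_deg < 5.0:
        return max_level
    # Assume acuity roughly halves every ~2.5 degrees of eccentricity.
    level = max_level * 2.0 ** (-(ecc_deg - 5.0) / 2.5)
    return max(min_level, int(level))

print(tessellation_level(0.0))   # 64: foveal region, full tessellation
print(tessellation_level(10.0))  # 16: coarser in the near periphery
print(tessellation_level(40.0))  # 1: far periphery, minimal detail
```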
{"title":"Perceptual model optimized efficient foveated rendering","authors":"Zipeng Zheng, Zhuo Yang, Yinwei Zhan, Yuqing Li, Wenxin Yu","doi":"10.1145/3281505.3281588","DOIUrl":"https://doi.org/10.1145/3281505.3281588","url":null,"abstract":"Higher resolution, wider FOV and increasing frame rate of HMD are demanding more VR computing resources. Foveated rendering is a key solution to these challenges. This paper introduces a perceptual model optimized foveated rendering. Tessellation levels and culling areas are adaptively adjusted based on visual sensitivity. We improve rendering performance while satisfying visual perception.","PeriodicalId":138249,"journal":{"name":"Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology","volume":"54 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124560735","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Automatic 3D modeling of artwork and visualizing audio in an augmented reality environment. Elijah Schwelling and Kyungjin Yoo. DOI: 10.1145/3281505.3281576
In recent years, traditional art museums have begun to use AR/VR technology to make visits more engaging and interactive. This paper details an application whose features are designed to be immediately engaging and educational for museum visitors within an AR view. The application superimposes an automatically generated 3D representation over a scanned artwork, along with the work's authorship, title, and date of creation. A GUI allows the user to exaggerate or reduce the depth scale of the 3D representation, and to search for related works of music. Given this music as audio input, the generated 3D model acts as an audio visualizer, changing its depth scale based on input frequency.
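The paper does not give its mapping from audio to depth; a plausible minimal version drives the depth scale from the dominant frequency of each short audio frame via an FFT. In the sketch below, the frame size, the mapped frequency range, and the scale bounds are all assumptions:

```python
import numpy as np

SAMPLE_RATE = 44100              # Hz, typical audio rate
FRAME_SIZE = 2048                # samples per analysis frame (~46 ms)
F_LOW, F_HIGH = 50.0, 4000.0     # frequency range mapped to depth (assumed)
SCALE_MIN, SCALE_MAX = 0.5, 2.0  # depth-scale bounds (assumed)

def depth_scale(frame):
    """Map one audio frame's dominant frequency to a 3D-model depth scale."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / SAMPLE_RATE)
    dominant = freqs[np.argmax(spectrum)]
    # Normalize log-frequency into [0, 1], then interpolate the scale.
    t = np.clip(np.log(max(dominant, F_LOW) / F_LOW)
                / np.log(F_HIGH / F_LOW), 0.0, 1.0)
    return SCALE_MIN + t * (SCALE_MAX - SCALE_MIN)

# Hypothetical usage: a 440 Hz tone maps to a mid-range depth scale (~1.24).
t = np.arange(FRAME_SIZE) / SAMPLE_RATE
print(depth_scale(np.sin(2 * np.pi * 440.0 * t)))
```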
{"title":"Automatic 3D modeling of artwork and visualizing audio in an augmented reality environment","authors":"Elijah Schwelling, Kyungjin Yoo","doi":"10.1145/3281505.3281576","DOIUrl":"https://doi.org/10.1145/3281505.3281576","url":null,"abstract":"In recent years, traditional art museums have begun to use AR/VR technology to make visits more engaging and interactive. This paper details an application which provides features designed to be immediately engaging and educational to museum visitors within an AR view. The application superimposes an automatically generated 3D representation over a scanned artwork, along with the work's authorship, title, and date of creation. A GUI allows the user to exaggerate or decrease the depth scale of the 3D representation, as well as to search for related works of music. Given this music as audio input, the generated 3D model will act as an audio visualizer by changing depth scale based on input frequency.","PeriodicalId":138249,"journal":{"name":"Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123346370","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We present "Hamlet", a prototype implementation of a virtual reality experience in which a player takes on a role of the theater director. The objective of the experience is to direct Adam, a virtual actor, to deliver the best possible performance of Hamlet's famous "To be, or not to be" soliloquy. The player interacts with Adam using voice commands, gestures, and body motion. Adam responds to acting directions, offers his own interpretations of the soliloquy, acquires the choreography from the player's body motion, and learns the scene blocking by following the player's pointing gestures.
{"title":"Hamlet","authors":"Krzysztof Pietroszek, C. Eckhardt, Liudmila Tahai","doi":"10.1145/3281505.3281600","DOIUrl":"https://doi.org/10.1145/3281505.3281600","url":null,"abstract":"We present \"Hamlet\", a prototype implementation of a virtual reality experience in which a player takes on a role of the theater director. The objective of the experience is to direct Adam, a virtual actor, to deliver the best possible performance of Hamlet's famous \"To be, or not to be\" soliloquy. The player interacts with Adam using voice commands, gestures, and body motion. Adam responds to acting directions, offers his own interpretations of the soliloquy, acquires the choreography from the player's body motion, and learns the scene blocking by following the player's pointing gestures.","PeriodicalId":138249,"journal":{"name":"Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115729149","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Balloonygen. Soichiro Toyohara, Toshiki Sato, and H. Koike. DOI: 10.1145/3281505.3281532
Balloonygen is an extended tabletop display embedded with a balloon-like deformable spherical screen, which can seamlessly present a spherical screen for three-dimensional content, such as omnidirectional images, within a conventional flat display. By continuously morphing between a two-dimensional tabletop shape and a three-dimensional spherical shape, it lets the benefits of a flat display and a spherical display coexist and offers a smoother approach to information sharing. Balloonygen dynamically provides an optimal way to display content by inflating a rubber membrane installed at the center of the tabletop display and morphing between the two shapes. In this study, through prototyping and the design of application scenarios, we discuss the advantages and disadvantages of this display and the interactions it makes possible.
{"title":"Balloonygen","authors":"Soichiro Toyohara, Toshiki Sato, H. Koike","doi":"10.1145/3281505.3281532","DOIUrl":"https://doi.org/10.1145/3281505.3281532","url":null,"abstract":"Balloonygen, an extended tabletop display embedded with a balloon-like deformable spherical screen, is a display that can seamlessly expose a spherical screen for three-dimensional contents, such as omnidirectional images, in a conventional flat display. By continuously morphing between a two-dimensional shape called tabletop and a three-dimensional shape called sphere, we render the benefits of a flat display and a spherical display to coexist and propose a smoother approach for information sharing. Balloonygen dynamically provides an optimal way to display the contents by inflating the rubber membrane installed at the center of a tabletop display and morphing between the two- and three-dimensional shapes. In this study, by prototyping and designing the application scenario, we discuss the advantages and disadvantages of this display and possible interactions involved.","PeriodicalId":138249,"journal":{"name":"Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117332237","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}