Doodle Daydream: An Interactive Display to Support Playful and Creative Interactions Between Co-workers
S. Elvitigala, Samantha W. T. Chan, Noura Howell, Denys J. C. Matthies, Suranga Nanayakkara
DOI: 10.1145/3267782.3274681
Interactive displays are increasingly embedded into the architecture we inhabit. We designed Doodle Daydream, an LED grid display with a mobile interface, which acts similarly to a shared sketch pad. Users contribute their own custom drawings that immediately play back on the display, fostering moment-to-moment playful interactions. This project builds on related work by designing a collaborative display to support calming yet playful interactions in an office setting.
{"title":"Doodle Daydream: An Interactive Display to Support Playful and Creative Interactions Between Co-workers","authors":"S. Elvitigala, Samantha W. T. Chan, Noura Howell, Denys J. C. Matthies, Suranga Nanayakkara","doi":"10.1145/3267782.3274681","DOIUrl":"https://doi.org/10.1145/3267782.3274681","url":null,"abstract":"Interactive displays are increasingly embedded into the architecture we inhabit. We designed Doodle Daydream, an LED grid display with a mobile interface, which acts similarly to a shared sketch pad. Users contribute their own custom drawings that immediately play back on the display, fostering moment-to-moment playful interactions. This project builds on related work by designing a collaborative display to support calming yet playful interactions in an office setting.","PeriodicalId":126671,"journal":{"name":"Proceedings of the 2018 ACM Symposium on Spatial User Interaction","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123374498","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Towards Unobtrusive Obstacle Detection and Notification for Virtual Reality Using Metaphors
P. Wozniak, Antonio Capobianco, N. Javahiraly, D. Curticapean
DOI: 10.1145/3267782.3274682
We present results of a preliminary study on our planned system for the detection of obstacles in the physical environment (PE) by means of an RGB-D sensor and their unobtrusive signalling using metaphors within the virtual environment (VE).
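As a rough illustration of the sensing side, the sketch below flags coarse regions of a depth frame that come closer than a warning distance; flagged cells are where a metaphor could be anchored in the VE. It assumes a NumPy depth image in millimetres — the grid layout and threshold are illustrative choices, not values from the paper.

```python
# Minimal sketch of RGB-D obstacle detection. Assumes a depth frame as a
# NumPy array in millimetres (e.g. from a RealSense/OpenNI driver); the
# warning distance and 3x3 grid are made-up illustrative values.
import numpy as np

def find_obstacle_cells(depth_mm, warn_dist_mm=1200.0, grid=(3, 3)):
    """Split the depth frame into a coarse grid and flag cells whose
    median depth falls below the warning distance."""
    h, w = depth_mm.shape
    rows, cols = grid
    flagged = []
    for r in range(rows):
        for c in range(cols):
            cell = depth_mm[r * h // rows:(r + 1) * h // rows,
                            c * w // cols:(c + 1) * w // cols]
            valid = cell[cell > 0]  # a reading of 0 usually means "no data"
            if valid.size and np.median(valid) < warn_dist_mm:
                flagged.append((r, c))  # candidate anchor for a VE metaphor
    return flagged
```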
{"title":"Towards Unobtrusive Obstacle Detection and Notification for Virtual Reality Using Metaphors","authors":"P. Wozniak, Antonio Capobianco, N. Javahiraly, D. Curticapean","doi":"10.1145/3267782.3274682","DOIUrl":"https://doi.org/10.1145/3267782.3274682","url":null,"abstract":"We present results of a preliminary study on our planned system for the detection of obstacles in the physical environment (PE) by means of an RGB-D sensor and their unobtrusive signalling using metaphors within the virtual environment (VE).","PeriodicalId":126671,"journal":{"name":"Proceedings of the 2018 ACM Symposium on Spatial User Interaction","volume":"67 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122849580","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Emotional Spatial Handwriting System
Ziqian Chen, M. Bourguet, G. Venture
DOI: 10.1145/3267782.3274679
According to graphology, people's emotional states can be detected from their handwriting. Unlike writing on paper, which can be analysed through its on-surface properties, spatial interaction-based handwriting is entirely in-air. Consequently, the techniques used in graphology to reveal the emotions of the writer are not directly transferable to spatial interaction. The purpose of our research is to propose a 3D handwriting system with emotional capabilities. For our study, we retained eight basic emotions represented by a large spectrum of coordinates in Russell's valence-arousal model: afraid, angry, disgusted, happy, sad, surprised, amorous and serious. We used the Leap Motion sensor (https://www.leapmotion.com) to capture hand motion, and C# with the Unity 3D game engine (https://unity3d.com) for the 3D rendering of the handwritten characters. With our system, users can write freely with their fingers in the air and immerse themselves in their handwriting by wearing a virtual reality headset. We aim to create a rendering model that can be universally applied to any handwriting and any alphabet: our choice of parameters is inspired by both Latin typography and Chinese calligraphy, characterised by its four elementary writing instruments: the brush, the ink, the brush-stand and the ink-stone. The final parameter selection was carried out by immersing ourselves in our own in-air handwriting and through numerous trials. The five rendering parameters we chose are: (1) weight, determined by the radius of the rendered stroke; (2) smoothness, determined by the minimum length of one stroke segment; (3) tip of stroke, determined by the ratio of the radius to the writing speed; (4) ink density, determined by the opacity of the rendering material; and (5) ink dryness, determined by the texture of the rendering material, which can be coarse or smooth. Having implemented the 3D handwriting system and empirically determined the five rendering parameters, we designed a survey to gather opinions on which rendering parameter values are most effective at conveying the intended emotions. For each parameter, we created three handwriting samples by varying the value of that parameter; to avoid a combinatorial explosion in the number of samples, each parameter was varied independently of the others. The formula we used to calculate the optimal value R of a parameter is R = \sum_{i=1}^{3} (q_i / Q) P_i, where i = 1, 2, 3 indexes the samples, P_i is the value of the parameter used in sample i, q_i is the number of respondents who chose that sample, and Q is the total number of respondents (64 on average). Applying the R values to the 3D handwriting system in Unity, we obtain the eight emotional styles. We calculated the Euclidean distances between each pair of emotions using both their 2D coordinates (x, y) in Russell's valence-arousal emotion model and their 5-dimensional vectors of normalised parameter values. Across all pairs of emotions, there is a po
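The two computations the abstract describes — the vote-weighted optimal parameter value and the pairwise Euclidean distances between emotions — can be sketched in a few lines of Python. The vote-weighted mean is our reading of the garbled formula; the sample values and vote counts below are invented purely for illustration.

```python
# Sketch of the survey aggregation and the emotion-distance computation,
# assuming R = sum_i (q_i / Q) * P_i over the three samples per parameter.
import math

def optimal_value(values, votes):
    """R = sum_i (q_i / Q) * P_i, where values[i] is the parameter value
    P_i used in sample i and votes[i] is the respondent count q_i."""
    Q = sum(votes)
    return sum(q / Q * p for p, q in zip(values, votes))

# e.g. three candidate stroke radii ("weight") and hypothetical vote counts
radius_R = optimal_value(values=[0.5, 1.0, 2.0], votes=[10, 34, 20])

def euclidean(a, b):
    """Distance between two emotions, given either as 2D valence-arousal
    coordinates or as 5D normalised rendering-parameter vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

d = euclidean([0.8, 0.6], [-0.7, 0.4])  # e.g. happy vs. sad, made-up coords
```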
{"title":"An Emotional Spatial Handwriting System","authors":"Ziqian Chen, M. Bourguet, G. Venture","doi":"10.1145/3267782.3274679","DOIUrl":"https://doi.org/10.1145/3267782.3274679","url":null,"abstract":"According to graphology, people's emotional states can be detected from their handwriting. Unlike writing on paper, which can be analysed through its on-surface properties, spatial interaction-based handwriting is entirely in-air. Consequently, the techniques used in graphology to reveal the emotions of the writer are not directly transferable to spatial interaction. The purpose of our research is to propose a 3D handwriting system with emotional capabilities. For our study, we retained height basic emotions represented by a large spectrum of coordinates in the Russell's valence-arousal model: afraid, angry, disgusted, happy, sad, surprised, amorous and serious. We used the Leap Motion sensor (https://www.leapmotion.com) to capture hand motion; C# and the Unity 3D game engine (https://unity3d.com) for the 3D rendering of the handwritten characters. With our system, users can write freely with their fingers in the air and immerse themselves in their handwriting by wearing a virtual reality headset. We aim to create a rendering model that can be universally applied to any handwriting and any alphabet: our choice of parameters is inspired by both Latin typography and Chinese calligraphy, characterised by its four elementary writing instruments: the brush, the ink, the brush-stand and the ink-stone. The final parameter selection process was carried out by immersing ourselves in our own in-air handwriting and through numerous trials. The five rendering parameters we chose are: (1) weight determined by the radius of the rendered stroke; (2) smoothness determined by the minimum length of one stroke segment; (3) tip of stroke determined by the ratio of the radius to the writing speed; (4) ink density determined by the opacity of the rendering material; and (5) ink dryness determined by the texture of the rendering material, which can be coarse or smooth. Having implemented the 3D handwriting system and empirically determined five rendering parameters, we designed a survey to gather opinions on which rendering parameters' values are most effective at conveying the intended emotions. For each parameter, we created three handwriting samples by varying the value of the parameter, and to avoid the combinatorial explosion of the number of samples, each parameter was made to vary independently of the others. The formula we used to calculate the optimal value of a parameter is as follows: Where i = 1, 2, 3 refers to the value of the parameter used in the sample; Q is the total number of respondents (64 in average); qi is the number of people who chose that sample; and Pi denotes the parameter. Applying the R values to the 3D handwriting system in Unity, we obtain the eight emotional styles illustrated below. We calculated the Euclidean distances between each pair of emotions using their 2D coordinates (x, y) in the Russell's valence-arousal emotion model and their 5-dimensional vectors of normalised parameters' values. 
Across all pairs of emotions, there is a po","PeriodicalId":126671,"journal":{"name":"Proceedings of the 2018 ACM Symposium on Spatial User Interaction","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114836164","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Over My Hand: Using a Personalized Hand in VR to Improve Object Size Estimation, Body Ownership, and Presence
Sungchul Jung, G. Bruder, P. Wisniewski, C. Sandor, C. Hughes
DOI: 10.1145/3267782.3267920
When estimating the distance or size of an object in the real world, we often use our own body as a metric; this strategy is called body-based scaling. However, object size estimation in a virtual environment presented via a head-mounted display differs from the physical world due to technical limitations such as a narrow field of view and the low fidelity of the virtual body compared to one's real body. In this paper, we focus on increasing the fidelity of a participant's body representation in virtual environments with a personalized virtual hand, created from the participant's own hand characteristics through a visually faithful augmented virtuality approach. To investigate the impact of the personalized hand, we compared it against a generic virtual hand and measured effects on virtual body ownership, spatial presence, and object size estimation. Specifically, we asked participants to perform a perceptual matching task based on scaling a virtual box on a table in front of them. Our results show that the personalized hand not only increased virtual body ownership and spatial presence, but also supported participants in correctly estimating the size of a virtual object in the proximity of their hand.
{"title":"Over My Hand: Using a Personalized Hand in VR to Improve Object Size Estimation, Body Ownership, and Presence","authors":"Sungchul Jung, G. Bruder, P. Wisniewski, C. Sandor, C. Hughes","doi":"10.1145/3267782.3267920","DOIUrl":"https://doi.org/10.1145/3267782.3267920","url":null,"abstract":"When estimating the distance or size of an object in the real world, we often use our own body as a metric; this strategy is called body-based scaling. However, object size estimation in a virtual environment presented via a head-mounted display differs from the physical world due to technical limitations such as narrow field of view and low fidelity of the virtual body when compared to one's real body. In this paper, we focus on increasing the fidelity of a participant's body representation in virtual environments with a personalized hand using personalized characteristics and a visually faithful augmented virtuality approach. To investigate the impact of the personalized hand, we compared it against a generic virtual hand and measured effects on virtual body ownership, spatial presence, and object size estimation. Specifically, we asked participants to perform a perceptual matching task that was based on scaling a virtual box on a table in front of them. Our results show that the personalized hand not only increased virtual body ownership and spatial presence, but also supported participants in correctly estimating the size of a virtual object in the proximity of their hand.","PeriodicalId":126671,"journal":{"name":"Proceedings of the 2018 ACM Symposium on Spatial User Interaction","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121811098","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Hands-free" pointing techniques used in mid-air gesture interaction require precise motor control and dexterity. Although being applied in a growing number of interaction contexts over the past few years, this input method can be challenging for older users (60+ years old) who experience natural decline in pointing abilities due to natural ageing process. We report the findings of a target acquisition experiment in which older adults had to perform "point-and-select" gestures in mid-air. The experiment investigated the effect of 6 feedback conditions on pointing and selection performance of older users. Our findings suggest that the bimodal combination of Visual and Audio feedback lead to faster target selection times for older adults, but did not lead to making less errors. Furthermore, target location on screen was found to play a more important role in both selection time and accuracy of point-and-select tasks than feedback type.
{"title":"Evaluating the Effects of Feedback Type on Older Adults' Performance in Mid-Air Pointing and Target Selection","authors":"A. Cabreira, F. Hwang","doi":"10.1145/3267782.3267933","DOIUrl":"https://doi.org/10.1145/3267782.3267933","url":null,"abstract":"\"Hands-free\" pointing techniques used in mid-air gesture interaction require precise motor control and dexterity. Although being applied in a growing number of interaction contexts over the past few years, this input method can be challenging for older users (60+ years old) who experience natural decline in pointing abilities due to natural ageing process. We report the findings of a target acquisition experiment in which older adults had to perform \"point-and-select\" gestures in mid-air. The experiment investigated the effect of 6 feedback conditions on pointing and selection performance of older users. Our findings suggest that the bimodal combination of Visual and Audio feedback lead to faster target selection times for older adults, but did not lead to making less errors. Furthermore, target location on screen was found to play a more important role in both selection time and accuracy of point-and-select tasks than feedback type.","PeriodicalId":126671,"journal":{"name":"Proceedings of the 2018 ACM Symposium on Spatial User Interaction","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124488673","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Exploration of Altered Muscle Mappings of Arm to Finger Control for 3D Selection
E. O. Hunt, Amy Banic
DOI: 10.1145/3267782.3275241
In this poster, we present a novel 3-dimensional (3D) interaction technique, Altered Muscle Mapping (AMM), which re-maps muscle movements of the hands and arms to the fingers and wrists. We implemented an initial design of AMM as a 3D selection technique in which finger movements translate a virtual cursor (in 3 degrees of freedom) for selection. This may preserve the performance benefits of direct manipulation while reducing physical fatigue. We designed an initial set of mapping variations, and results from a pilot study provide initial performance insights into these mapping configurations. AMM has potential for direct hand interaction in virtual and augmented reality and for users with a limited range of motion.
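A minimal sketch of the re-mapping idea, under the assumption that AMM amplifies fingertip displacement into a 3-DOF cursor translation; the gain value and vector type below are illustrative, not the authors' parameters.

```python
# Hedged sketch of an arm-to-finger re-mapping: small fingertip movements
# drive a 3-DOF virtual cursor, standing in for large, fatiguing arm reaches.
from dataclasses import dataclass

@dataclass
class Vec3:
    x: float = 0.0
    y: float = 0.0
    z: float = 0.0

def update_cursor(cursor, fingertip_delta, gain=4.0):
    """Translate the cursor by an amplified copy of the fingertip motion
    measured this frame (gain is an assumed, tunable amplification)."""
    return Vec3(cursor.x + gain * fingertip_delta.x,
                cursor.y + gain * fingertip_delta.y,
                cursor.z + gain * fingertip_delta.z)
```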
{"title":"An Exploration of Altered Muscle Mappings of Arm to Finger Control for 3D Selection","authors":"E. O. Hunt, Amy Banic","doi":"10.1145/3267782.3275241","DOIUrl":"https://doi.org/10.1145/3267782.3275241","url":null,"abstract":"In this poster, we present a novel 3-dimensional (3D) interaction technique, Altered Muscle Mapping (AMM), to re-map muscle movements of hands/arms to fingers/wrists. We implemented an initial design of AMM as a 3-Dimensional (3D) selection technique, where finger movements translate a virtual cursor (in 3-degrees-of-freedom) for selection. Direct Manipulation performance benefits may be preserved yet reduce physical fatigue. We designed an initial set of mapping variations. Our results from an initial pilot study provide initial performance insights of mapping configurations. AMM has potential for direct hand interaction in virtual and augmented reality and for users with a limited range of motion.","PeriodicalId":126671,"journal":{"name":"Proceedings of the 2018 ACM Symposium on Spatial User Interaction","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128207977","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Virtual Campus: Infrastructure and spatiality management tools based on 3D environments
Tatiana Sánchez Botero, Alejandro Montes Muñoz
DOI: 10.1145/3267782.3274676
This paper describes the development and implementation of "Virtual Campus", a prototype that brings together a set of interfaces and interaction techniques (VR, AR, mobile apps, 3D) to offer alternatives to the systems traditionally used (web, desktop applications, etc.) for the spatial management of the campus at Universidad Católica de Pereira, such as the reservation of classrooms, objects, and zones, and security, among others.
Step Detection for Rollator Users with Smartwatches
Denys J. C. Matthies, Marian Haescher, Suranga Nanayakkara, G. Bieber
DOI: 10.1145/3267782.3267784
Smartwatches enable spatial user input, notably the continuous tracking of physical activity and relevant health parameters. Additionally, smartwatches are experiencing greater social acceptability, even among the elderly. While the step count is an essential parameter for calculating the user's spatial activity, current detection algorithms are insufficient for counting steps when using a rollator, a common walking aid for elderly people. In a pilot study conducted with eight different wrist-worn smart devices, an overall recognition rate of only ~10% was achieved. This is because the characteristic motions utilized by step counting algorithms are poorly reflected at the user's wrist when pushing a rollator; the same issue arises with other spatial activities such as pushing a pram, a bike, or a shopping cart. This paper therefore introduces an improved step counting algorithm for wrist-worn accelerometers. The new algorithm was first evaluated in a controlled study and achieved promising results, with an overall recognition rate of ~85%. As a follow-up, a preliminary field study with randomly selected elderly people who used rollators yielded similar detection rates of ~83%. We expect this research to contribute to greater step counting precision in smart wearable technology.
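For context, a conventional wrist-worn step counter of the kind the paper improves on looks roughly like the sketch below: peak detection on the smoothed acceleration magnitude. This is a generic baseline, not the authors' algorithm; the sampling rate and thresholds are assumed values, and it is exactly this kind of detector that fails when the wrist rests on a rollator handle.

```python
# Generic baseline step counter for a wrist-worn accelerometer (NOT the
# paper's improved algorithm). Assumes (N, 3) acceleration samples in g
# at fs Hz; peak threshold and refractory gap are illustrative.
import numpy as np

def count_steps(acc_xyz, fs=50.0, min_peak=1.15, min_gap_s=0.3):
    """Count peaks in the smoothed acceleration magnitude that exceed
    min_peak g and are at least min_gap_s seconds apart."""
    mag = np.linalg.norm(acc_xyz, axis=1)
    k = int(fs * 0.2)                            # ~200 ms moving average
    smooth = np.convolve(mag, np.ones(k) / k, mode="same")
    steps, last = 0, -np.inf
    for i in range(1, len(smooth) - 1):
        is_peak = smooth[i - 1] < smooth[i] >= smooth[i + 1]
        if is_peak and smooth[i] > min_peak and (i - last) / fs >= min_gap_s:
            steps, last = steps + 1, i
    return steps
```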
{"title":"Step Detection for Rollator Users with Smartwatches","authors":"Denys J. C. Matthies, Marian Haescher, Suranga Nanayakkara, G. Bieber","doi":"10.1145/3267782.3267784","DOIUrl":"https://doi.org/10.1145/3267782.3267784","url":null,"abstract":"Smartwatches enable spatial user input, namely for the continuous tracking of physical activity and relevant health parameters. Additionally, smartwatches are experiencing greater social acceptability, even among the elderly. While step counting is an essential parameter to calculate the user's spatial activity, current detection algorithms are insufficient for calculating steps when using a rollator, which is a common walking aid for elderly people. Through a pilot study conducted with eight different wrist-worn smart devices, an overall recognition of ~10% was achieved. This is because characteristic motions utilized by step counting algorithms are poorly reflected at the user's wrist when pushing a rollator. This issue is also present among other spatial activities such as pushing a pram, a bike, and a shopping cart. This paper thus introduces an improved step counting algorithm for wrist-worn accelerometers. This new algorithm was first evaluated through a controlled study and achieved promising results with an overall recognition of ~85%. As a follow-up, a preliminary field study with randomly selected elderly people who used rollators resulted in similar detection rates of ~83%. To conclude, this research will expectantly contribute to greater step counting precision in smart wearable technology.","PeriodicalId":126671,"journal":{"name":"Proceedings of the 2018 ACM Symposium on Spatial User Interaction","volume":"176 11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120964616","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
MagicPAPER: An Integrated Shadow-Art Hardware Device Enabling Touch Interaction on Kraft paper
Sirui Wang, Jiayuan Wang, Qin Wu
DOI: 10.1145/3267782.3274689
As the most common writing material in our daily lives, paper is an important carrier of traditional painting, and it also has a more comfortable physical touch than electronic screens. In this study, we designed a shadow-art device for human-computer interaction called MagicPAPER, based on physical touch detection, gesture recognition, and reality projection. MagicPAPER consists of a pen, kraft paper, and several detection devices, such as AirBar, Kinect, LeapMotion, and WebCam. To make MagicPAPER more engaging, we developed thirteen applications that allow users to experience and explore creative interactions on a desktop with a pen and a piece of paper.
{"title":"MagicPAPER: An Integrated Shadow-Art Hardware Device Enabling Touch Interaction on Kraft paper","authors":"Sirui Wang, Jiayuan Wang, Qin Wu","doi":"10.1145/3267782.3274689","DOIUrl":"https://doi.org/10.1145/3267782.3274689","url":null,"abstract":"As the most common writing material in our daily life, paper is an important carrier of traditional painting, and it also has a more comfortable physical touch than electronic screens. In this study, we designed a shadow-art device for human--computer interaction called MagicPAPER, which is based on physical touch detection, gesture recognition, and reality projection. MagicPAPER consists of a pen, kraft paper, and several detection devices, such as AirBar, Kinect, LeapMotion, and WebCam. To make our MagicPAPER more interesting, we developed thirteen applications that allow users to experience and explore creative interactions on a desktop with a pen and a piece of paper.","PeriodicalId":126671,"journal":{"name":"Proceedings of the 2018 ACM Symposium on Spatial User Interaction","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132610459","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Look to Go: An Empirical Evaluation of Eye-Based Travel in Virtual Reality
Y. Qian, Robert J. Teather
DOI: 10.1145/3267782.3267798
We present two experiments evaluating the effectiveness of the eye as a controller for travel in virtual reality (VR). We used the FOVE head-mounted display (HMD), which includes an eye tracker. The first experiment compared seven different travel techniques to control movement direction while flying through target rings. The second experiment involved travel on a terrain: moving to waypoints while avoiding obstacles with three travel techniques. Results of the first experiment indicate that performance of the eye tracker with head-tracking was close to head motion alone, and better than eye-tracking alone. The second experiment revealed that completion times of all three techniques were very close. Overall, eye-based travel suffered from calibration issues and yielded much higher cybersickness than head-based approaches.
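As a sketch of what gaze-directed travel boils down to, the snippet below advances the viewpoint along the current gaze ray each frame. The function name and speed constant are assumptions for illustration; the FOVE SDK exposes gaze data through its own API.

```python
# Minimal gaze-directed steering sketch: per frame, move the viewpoint
# along the (normalised) gaze direction at an assumed constant speed.
import numpy as np

def steer(position, gaze_dir, dt, speed=2.0):
    """Advance the viewpoint 'speed' metres per second along gaze_dir,
    given the frame time dt in seconds."""
    d = gaze_dir / np.linalg.norm(gaze_dir)
    return position + d * speed * dt
```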
{"title":"Look to Go: An Empirical Evaluation of Eye-Based Travel in Virtual Reality","authors":"Y. Qian, Robert J. Teather","doi":"10.1145/3267782.3267798","DOIUrl":"https://doi.org/10.1145/3267782.3267798","url":null,"abstract":"We present two experiments evaluating the effectiveness of the eye as a controller for travel in virtual reality (VR). We used the FOVE head-mounted display (HMD), which includes an eye tracker. The first experiment compared seven different travel techniques to control movement direction while flying through target rings. The second experiment involved travel on a terrain: moving to waypoints while avoiding obstacles with three travel techniques. Results of the first experiment indicate that performance of the eye tracker with head-tracking was close to head motion alone, and better than eye-tracking alone. The second experiment revealed that completion times of all three techniques were very close. Overall, eye-based travel suffered from calibration issues and yielded much higher cybersickness than head-based approaches.","PeriodicalId":126671,"journal":{"name":"Proceedings of the 2018 ACM Symposium on Spatial User Interaction","volume":"176 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122992947","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}