"Study of Heart Rate Visualizations on a Virtual Smartwatch"
Fairouz Grioui, Tanja Blascheck
DOI: https://doi.org/10.1145/3489849.3489913

In this paper, we present three visualizations showing heart rate (HR) data collected over time. Two of the visualizations include a summary chart (bar or radial), aggregating the amount of time spent per HR zone (i.e., low, moderate, or high intensity). We conducted a pilot study with five participants to evaluate the efficiency of the visualizations for monitoring the intensity of an activity while playing a tennis-like Virtual Reality game. Preliminary results show that participants performed better (in terms of time and accuracy) with, and preferred, the bar chart summary.
"UGRA in VR: A Virtual Reality Simulation for Training Anaesthetists"
A. Bogdanovych, A. Chuan
DOI: https://doi.org/10.1145/3489849.3489924

We present a virtual reality training simulator for medical interns practicing ultrasound-guided regional anaesthesia (UGRA). UGRA is a nerve block procedure commonly performed by critical care doctors such as anaesthetists, emergency medicine physicians, and paramedics. The procedure is complex and requires intense training: it is traditionally taught one-on-one by experts and practiced on simulated models long before being attempted on live patients. We present our virtual reality application that allows this procedure to be trained in a simulated environment. The use of virtual reality makes training future doctors to perform UGRA safer and more cost-efficient than current approaches.
"Towards Context-aware Automatic Haptic Effect Generation for Home Theatre Environments"
Yaxuan Li, Yongjae Yoo, Antoine Weill-Duflos, J. Cooperstock
DOI: https://doi.org/10.1145/3489849.3489887

The application of haptic technology in entertainment systems, such as Virtual Reality and 4D cinema, enables novel experiences for users and drives the demand for efficient haptic authoring systems. Here, we propose an automatic multimodal vibrotactile content creation pipeline that substantially improves the overall hapto-audiovisual (HAV) experience based on contextual audio and visual content from movies. Our algorithm is implemented on a low-cost system with nine actuators attached to a viewing chair and extracts significant features from video files to generate corresponding haptic stimuli. We implemented this pipeline and used the resulting system in a user study (n = 16), quantifying user experience according to the sense of immersion, preference, harmony, and discomfort. The results indicate that the haptic patterns generated by our algorithm complement the movie content and provide an immersive and enjoyable HAV user experience. This further suggests that the pipeline can facilitate the efficient creation of 4D effects and could therefore be applied to improve the viewing experience in home theatre environments.
"Qualitative Dimensions of Technology-Mediated Reflective Learning: The Case of VR Experience of Psychosis"
Saliha Akbas, K. Kuscu, A. Yantaç, Gizem Erdem, Sinem Semsioglu, Onur Gurkan, A. Günay, A. Vatansever, T. Eskenazi
DOI: https://doi.org/10.1145/3489849.3489869

Self-reflection is the evaluation of one's own inferential processes, often triggered by complex social and emotional experiences whose ambiguity and unpredictability push one to re-interpret the experience and update existing knowledge. Using immersive Virtual Reality (VR), we aimed to support social and emotional learning (SEL) through reflection in psychology education. We used the case of psychosis, as it involves ambiguous perceptual experiences. In a co-design workshop, we designed a VR prototype that simulates the perceptual, cognitive, affective, and social elements of psychotic experiences, followed by a user study with psychology students to evaluate the potential of this technology to support reflection. Our analyses suggest that technology-mediated reflection in SEL involves two dimensions: spontaneous perspective-taking and a shared state of affect. By exploring the subjective qualities of reflection along these dimensions, our work contributes to the literature on technology-supported learning and informs VR developers designing for reflection.
"Absolute and Differential Thresholds of Motion Effects in Cardinal Directions"
Jiwan Lee, Jaejun Park, Seungmoon Choi
DOI: https://doi.org/10.1145/3489849.3489870

In this paper, we report both absolute and differential thresholds for motion in the six cardinal directions as comprehensively as possible. As with typical 4D motion effects, we used low-intensity, high-frequency sinusoidal motions as stimuli. Hence, we could also compare the effectiveness of the motion types in delivering motion effects. We found that both kinds of thresholds were higher for the z-axis (up-down) than for the x-axis (front-back) and y-axis (left-right), and that the type of motion significantly affected both thresholds. Further, we found a strong linear relationship between differential thresholds and reference intensities for roll, yaw, and surge; the remaining motion types showed a comparatively weak linear relationship. Our results can inform the generation of motion effects for 4D content that accounts for human sensitivity to motion feedback.
"Analysis of Detection Thresholds for Hand Redirection during Mid-Air Interactions in Virtual Reality"
Judith Hartfill, Jenny Gabel, Lucie Kruse, S. Schmidt, Kevin Riebandt, Simone Kühn, Frank Steinicke
DOI: https://doi.org/10.1145/3489849.3489866

Avatars in virtual reality (VR) with fully articulated hands enable users to naturally interact with the virtual environment (VE). Interactions are often performed in a one-to-one mapping between the movements of the user's real body, for instance, the hands, and the displayed body of the avatar. However, VR also allows manipulating this mapping to introduce non-isomorphic techniques. In this context, research on manipulations of virtual hand movements typically focuses on increasing the user's interaction space to improve the overall efficiency of hand-based interactions. In this paper, we investigate a hand retargeting method for decelerated hand movements. With this technique, users need to perform larger movements to reach for an object in the VE, which can be utilized, for example, in therapeutic applications. If these gain-based redirections of virtual hand movements are small enough, users become unable to reliably detect them due to the dominance of the visual sense. In a psychophysical experiment, we analyzed detection thresholds for six different motion paths in mid-air for both hands. We found significantly different detection thresholds between movement directions on each spatial axis. To verify our findings, we applied the identified gains in a playful application in a confirmatory study.
"Catching Jellies in Immersive Virtual Reality: A Comparative Teleoperation Study of ROVs in Underwater Capture Tasks"
Aviv Elor, Tiffany Thang, B. Hughes, A. Crosby, Amy Phung, Everardo Gonzalez, K. Katija, S. Haddock, E. Martin, B. Erwin, L. Takayama
DOI: https://doi.org/10.1145/3489849.3489861

Remotely Operated Vehicles (ROVs) are essential to human-operated underwater expeditions in the deep sea. However, piloting an ROV to safely interact with live ecosystems is an expensive and cognitively demanding task, requiring extensive maneuvering and situational awareness. Immersive Virtual Reality (VR) Head-Mounted Displays (HMDs) could address some of these challenges. This paper investigates how VR HMDs influence operator performance through a novel telepresence system for piloting ROVs in real time. We present an empirical user study [N=12] that examines common midwater creature capture tasks, comparing Stereoscopic-VR, Monoscopic-VR, and Desktop teleoperation conditions. Our findings indicate that Stereoscopic-VR can outperform Monoscopic-VR and Desktop teleoperation in ROV capture tasks, effectively doubling operator efficacy. We also found significant differences in presence, task load, usability, intrinsic motivation, and cybersickness. Our research points to new opportunities for combining VR with ROVs.
"Immersive Visual Interaction with Autonomous Multi-Vehicle Systems"
Ali Samini, P. Ljung
DOI: https://doi.org/10.1145/3489849.3489917

With the emergence of multi-vehicular autonomous systems, such as fleets of AI-controlled, fully autonomous vehicles, we need novel systems that provide tools for planning, executing, and reviewing missions while keeping humans in the loop during all phases. We therefore present an immersive visualization system for interacting with these systems at a higher cognitive level than piloting individual vehicles. Our system provides both desktop and VR modes for visual interaction with the robotic multi-vehicle AI system.
"A sharing system for the annoyance of menstrual symptoms using electrical muscle stimulation and thermal stimulations"
Chihiro Asada, Kotori Tsutsumi, Yuichi Tamura, Naoya Hara, Wataru Omori, Yuta Otsuka, Katsunari Sato
DOI: https://doi.org/10.1145/3489849.3489937
In 27th ACM Symposium on Virtual Reality Software and Technology (VRST '21), December 08–10, 2021, Osaka, Japan. ACM, New York, NY, USA, 2 pages.
"Force-Based Foot Gesture Navigation in Virtual Reality"
Siyi Liu, Gun A. Lee, Yi Li, Thammathip Piumsomboon, Barrett Ens
DOI: https://doi.org/10.1145/3489849.3489945

Navigation is a primary interaction in virtual reality. Previous research has explored different forms of artificial locomotion techniques for navigation, including hand gestures and body motions. However, few studies have investigated force-based foot gestures as a locomotion technique. We present three force-based foot gestures (Foot Fly, Foot Step, and Foot Teleportation) for navigation in a virtual environment, relying on surface electromyography sensor readings from leg muscles. A pilot study comparing our techniques with controller-based techniques indicates that force-based foot gestures can provide a fun and engaging alternative. Of all six input techniques evaluated, Foot Fly was often the most preferred, despite requiring more exertion than the Controller Fly technique.