Humans have tactile sensory organs distributed all over the body. However, haptic devices are often designed for only one body part (e.g., the hands, wrist, or face). We propose PneuMod, a wearable modular haptic device that can simultaneously and independently present pressure and thermal (warm and cold) cues to different parts of the body. The PneuMod module is a pneumatically actuated silicone bubble with an integrated Peltier device that can render thermo-pneumatic feedback through shapes, locations, patterns, and motion effects. The modules can be arranged with varying resolutions on fabric to create sleeves, headbands, leg wraps, and other forms that can be worn on multiple parts of the body. In this paper, we describe the system design, the module implementation, and applications for social touch interactions and in-game thermal and pressure feedback.
"PneuMod: A Modular Haptic Device with Localized Pressure and Thermal Feedback." Bowen Zhang, Misha Sra. Proceedings of the 27th ACM Symposium on Virtual Reality Software and Technology, 2021-12-08. DOI: 10.1145/3489849.3489857
High-speed finger tracking is necessary for augmented reality and for human-machine cooperation without latency discomfort, but conventional markerless finger-tracking methods are not fast enough and marker-based methods have low wearability. In this paper, we propose the ellipses ring marker (ERM), a finger-ring marker consisting of multiple ellipses, together with a high-speed image recognition algorithm for it. The finger-ring shape can be worn continuously, and its surface is well suited to observation from a wide range of viewing angles. The invariance of the ellipse under perspective projection enables accurate, low-latency pose estimation. We experimentally investigated its advantages in terms of normal distribution, validated its accuracy and computational cost in marker tracking, and demonstrated dynamic projection mapping on a palm.
"Ellipses Ring Marker for High-speed Finger Tracking." Tomohiro Sueishi, M. Ishikawa. Proceedings of the 27th ACM Symposium on Virtual Reality Software and Technology, 2021-12-08. DOI: 10.1145/3489849.3489856
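The ERM abstract above hinges on the fact that an ellipse maps to an ellipse under perspective projection, so the marker can be recovered by fitting conics to image points. The paper does not publish its algorithm; the following is a minimal, hypothetical sketch of the kind of direct least-squares conic fit such a recognition pipeline could build on:

```python
import numpy as np

def fit_conic(points):
    """Least-squares fit of a conic a*x^2 + b*x*y + c*y^2 + d*x + e*y = 1
    (the constant term normalized to -1) to 2D points of shape (N, 2)."""
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([x**2, x * y, y**2, x, y])
    coeffs, *_ = np.linalg.lstsq(A, np.ones(len(points)), rcond=None)
    return coeffs  # (a, b, c, d, e)

def ellipse_center(coeffs):
    """The conic's gradient vanishes at the center:
    [[2a, b], [b, 2c]] @ [cx, cy] = [-d, -e]."""
    a, b, c, d, e = coeffs
    M = np.array([[2 * a, b], [b, 2 * c]])
    return np.linalg.solve(M, [-d, -e])

# Synthetic marker contour: center (2, 1), semi-axes 3 and 1.
theta = np.linspace(0, 2 * np.pi, 50, endpoint=False)
pts = np.column_stack([2 + 3 * np.cos(theta), 1 + np.sin(theta)])
center = ellipse_center(fit_conic(pts))
```

In a real tracker these points would come from edge detection on the camera image, and the fitted conic parameters (not just the center) would feed the pose estimate.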
BreachMob is a virtual reality (VR) tool that applies open design principles from information security to physical buildings and structures. BreachMob uses a detailed 3D digital model of a property owner's building. The model is then published as a virtual environment (VE), complete with all applicable security measures, and released to the public to test the building's security and find potential vulnerabilities by completing specified objectives. Our paper contributes a new method of applying VR to crowdsource the detection of physical-environment vulnerabilities. We detail the technical realization of two BreachMob prototypes (a home and an airport), reflecting on static and dynamic vulnerabilities. Our design critique suggests that BreachMob promotes user immersion by allowing participants the freedom to behave in ways that align with the experience of breaching physical security protocols.
"BreachMob: Detecting Vulnerabilities in Physical Environments Using Virtual Reality." Lior Somin, Zachary Mckendrick, Patrick Finn, E. Sharlin. Proceedings of the 27th ACM Symposium on Virtual Reality Software and Technology, 2021-12-08. DOI: 10.1145/3489849.3489883
Object categorisation methods have historically been used in the literature to understand and group real objects into meaningful categories, and can be used to define human interaction patterns (i.e., grasping). When investigating grasping patterns for Virtual Reality (VR), researchers have used Zingg's methodology, which categorises objects based on shape and form. However, this methodology is limited and does not consider other object attributes that might influence grasping interaction in VR. To address this, our work presents a study of three categorisation methods for virtual objects. We employ Zingg's object categorisation as a benchmark against existing real and virtual object interaction work, and introduce two new categorisation methods that focus on virtual object equilibrium and virtual object component parts. We evaluate these categorisation methods using a dataset of 1872 grasps from a VR docking task on 16 virtual representations of real objects, and report findings on grasp patterns for each method, showing differences in grasp class, grasp type, and aperture. We conclude with recommendations and future ideas on how these categorisation methods can be taken forward to inform a richer understanding of grasping in VR.
"Virtual Object Categorisation Methods: Towards a Richer Understanding of Object Grasping for Virtual Reality." Andreea-Dalia Blaga, Maite Frutos Pascual, C. Creed, Ian Williams. Proceedings of the 27th ACM Symposium on Virtual Reality Software and Technology, 2021-12-08. DOI: 10.1145/3489849.3489875
Recently, many attempts have been made to apply real-time simultaneous localization and mapping (SLAM) technology to augmented reality (AR) applications. Such SLAM-based AR systems are generally implemented by augmenting virtual objects onto a diorama or three-dimensional sculpture. However, a new SLAM map needs to be generated whenever the space or the lighting where the diorama is installed changes, which raises the problem of updating the coordinate system each time a new map is generated. An updated coordinate system means the positions of virtual objects placed in the AR space change as well. We therefore propose a SLAM map regeneration technique in which the existing coordinate system is maintained even when a new map is generated.
"Efficient Mapping Technique under Various Spatial Changes for SLAM-based AR Services." Hyunwoo Cho, Sangheon Park, Chanho Park, Sung-Uk Jung. Proceedings of the 27th ACM Symposium on Virtual Reality Software and Technology, 2021-12-08. DOI: 10.1145/3489849.3489916
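A common way to keep a coordinate system consistent across map regenerations (not necessarily the authors' technique) is to estimate the rigid transform between anchor points shared by the old and new maps, then re-express virtual object poses through it. A minimal Kabsch-style sketch with synthetic anchors:

```python
import numpy as np

def rigid_align(src, dst):
    """Estimate rotation R and translation t with R @ src_i + t ~= dst_i
    for corresponding 3D points (Kabsch algorithm, no scale)."""
    src_c, dst_c = src.mean(0), dst.mean(0)
    H = (src - src_c).T @ (dst - dst_c)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    return R, t

# Anchors as seen in the new map (src) and their known positions in the
# old map's coordinate system (dst); values here are synthetic.
rng = np.random.default_rng(0)
src = rng.standard_normal((6, 3))
angle = np.pi / 5
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
t_true = np.array([0.3, -1.2, 2.0])
dst = src @ R_true.T + t_true

R, t = rigid_align(src, dst)
# Any pose in the new map can now be mapped back: p_old = R @ p_new + t,
# so previously placed virtual objects keep their positions.
```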
Virtual Reality (VR) has made its way into everyday life. While VR delivers an ever-increasing level of immersion, controls and their haptics are still limited. Current VR headsets come with dedicated controllers that are used to control every virtual interface element. However, the controller input mostly differs from the virtual interface. This reduces immersion. To provide a more realistic input, we present Flyables, a toolkit that provides matching haptics for virtual user interface elements using quadcopters. We took five common virtual UI elements and built their physical counterparts. We attached them to quadcopters to deliver on-demand haptic feedback. In a user study, we compared Flyables to controller-based VR input. While controllers still outperform Flyables in terms of precision and task completion time, we found that Flyables present a more natural and playful way to interact with VR environments. Based on the results from the study, we outline research challenges that could improve interaction with Flyables in the future.
"Flyables: Haptic Input Devices for Virtual Reality using Quadcopters." Jonas Auda, Nils Verheyen, Sven Mayer, Stefan Schneegass. Proceedings of the 27th ACM Symposium on Virtual Reality Software and Technology, 2021-12-08. DOI: 10.1145/3489849.3489855
The relationship between vection (illusory self-motion) and cybersickness is complex. This pilot study examined whether only unexpected vection provokes sickness during head-mounted display (HMD) based virtual reality (VR). Twenty participants ran through the tutorial of Mission: ISS (an HMD VR app) until they experienced notable sickness (maximum exposure was 15 minutes). We found that: 1) cybersickness was positively related to vection strength; and 2) cybersickness appeared more likely to occur during unexpected vection. Given the implications of these findings, future studies should attempt to replicate them and confirm the unexpected vection hypothesis with larger sample sizes and rigorous experimental designs.
"A Pilot Study Examining the Unexpected Vection Hypothesis of Cybersickness." J. Teixeira, Sebastien Miellet, S. Palmisano. Proceedings of the 27th ACM Symposium on Virtual Reality Software and Technology, 2021-12-08. DOI: 10.1145/3489849.3489895
Researchers have proposed many visual cues that can guide Virtual Reality (VR) and Augmented Reality (AR) users to out-of-view objects. This paper provides a classification of cues and tasks, and a visual model to describe and analyse cues in support of their design.
"Of Leaders and Directors: A visual model to describe and analyse persistent visual cues directing to single out-of view targets." Johan Winther Kristensen, Allan Schjørring, Alex Mikkelsen, Daniel Agerholm Johansen, H. Knoche. Proceedings of the 27th ACM Symposium on Virtual Reality Software and Technology, 2021-12-08. DOI: 10.1145/3489849.3489953
Despite technological advances in building design, visualizing 3D building layouts can be difficult for novice and expert users alike, who must take into account design constraints including line-of-sight and visibility. Using CADwalk, a commercial building design tool that employs floor-facing projectors to show 1:1 scale building plans, this work presents and evaluates two floor-based visual cues for assessing line-of-sight and visibility. Additionally, we examine the impact of virtual cameras looking from the inside out (from the user's location to objects of interest) and from the outside in (from an object of interest's location back towards the user). Results show that floor-based cues led participants to rate visibility more accurately, despite taking longer to complete the task. This is an effective tradeoff given the final outcome, the building design, where accuracy is paramount.
"Spatial Augmented Reality Visibility and Line-of-Sight Cues for Building Design." James A. Walsh, James Baumeister, B. Thomas. Proceedings of the 27th ACM Symposium on Virtual Reality Software and Technology, 2021-12-08. DOI: 10.1145/3489849.3489868
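On a 2D floor plan, the line-of-sight checks that cues like the ones above visualize reduce to ray-segment intersection against wall segments. A minimal sketch (an illustrative textbook formulation, not CADwalk's implementation; it ignores exactly collinear or endpoint-touching cases):

```python
def segments_intersect(p1, p2, q1, q2):
    """True if open segments p1-p2 and q1-q2 properly cross in 2D."""
    def cross(o, a, b):
        # z-component of (a - o) x (b - o)
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    d1, d2 = cross(q1, q2, p1), cross(q1, q2, p2)
    d3, d4 = cross(p1, p2, q1), cross(p1, p2, q2)
    # Each segment's endpoints must lie on opposite sides of the other.
    return d1 * d2 < 0 and d3 * d4 < 0

def has_line_of_sight(viewer, target, walls):
    """The viewer sees the target iff the sight line crosses no wall."""
    return not any(segments_intersect(viewer, target, a, b) for a, b in walls)

walls = [((2.0, -1.0), (2.0, 1.0))]             # one wall crossing the x-axis
print(has_line_of_sight((0, 0), (4, 0), walls))  # blocked -> False
print(has_line_of_sight((0, 2), (4, 2), walls))  # clear   -> True
```

Sampling many target points this way is one plausible route to the kind of floor-projected visibility regions the paper evaluates.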
We introduce RotoWrist, an infrared (IR) light-based solution for continuously and reliably tracking the 2-degree-of-freedom (2-DoF) angle of the wrist relative to the forearm using a wristband. The tracking system consists of eight time-of-flight (ToF) IR modules distributed around the wristband. We developed a computationally simple tracking approach that reconstructs the orientation of the wrist without any runtime training, ensuring user independence. An evaluation study showed that RotoWrist achieves a cross-user median tracking error of 5.9° in flexion/extension and 6.8° in radial/ulnar deviation, with no calibration required, as measured against optical ground truth. We further demonstrate RotoWrist's performance in a pointing task and compare it against ground-truth tracking.
"RotoWrist: Continuous Infrared Wrist Angle Tracking using a Wristband." Farshid Salemi Parizi, W. Kienzle, Eric Whitmire, Aakar Gupta, Hrvoje Benko. Proceedings of the 27th ACM Symposium on Virtual Reality Software and Technology, 2021-12-08. DOI: 10.1145/3489849.3489886
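The abstract above does not specify how the eight ToF readings become a 2-DoF angle. One hypothetical way to see why a ring of distance sensors encodes tilt: as the wrist flexes or deviates, the distances around the band acquire a first-harmonic asymmetry, whose cosine and sine components separate the two axes. A sketch of that idea (purely illustrative, not the authors' algorithm; `tilt_from_tof` and the 30 mm baseline are invented for the example):

```python
import numpy as np

def tilt_from_tof(distances, angles):
    """Fit d(phi) ~= d0 + a*cos(phi) + b*sin(phi) to ring-of-sensor readings.
    The first-harmonic coefficients (a, b) act as a crude proxy for the
    2-DoF tilt between the wristband plane and the tracked surface."""
    A = np.column_stack([np.ones_like(angles), np.cos(angles), np.sin(angles)])
    d0, a, b = np.linalg.lstsq(A, distances, rcond=None)[0]
    return a, b

# Eight sensors evenly spaced around the band.
angles = np.linspace(0, 2 * np.pi, 8, endpoint=False)
# Synthetic readings: 30 mm baseline plus a first-harmonic tilt component.
readings = 30 + 4 * np.cos(angles) - 2 * np.sin(angles)
a, b = tilt_from_tof(readings, angles)
```

A real system would still need a per-geometry mapping from (a, b) to degrees of flexion/extension and radial/ulnar deviation.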