LiteVR: Interpretable and Lightweight Cybersickness Detection using Explainable AI
Pub Date : 2023-02-05 | DOI: 10.1109/VR55154.2023.00076
Ripan Kumar Kundu, Rifatul Islam, J. Quarles, K. A. Hoque
Cybersickness is a common ailment associated with virtual reality (VR) user experiences. Several automated methods based on machine learning (ML) and deep learning (DL) exist to detect cybersickness. However, most of these cybersickness detection methods are computationally intensive black-box methods, and are therefore neither trustworthy nor practical for deployment on standalone, energy-constrained VR head-mounted displays (HMDs). In this work, we present an explainable artificial intelligence (XAI)-based framework, LiteVR, for cybersickness detection that explains the model's outcome, reduces the feature dimensions, and lowers the overall computational cost. First, we develop three cybersickness DL models based on long short-term memory (LSTM), gated recurrent units (GRU), and multilayer perceptrons (MLP). Then, we employ a post-hoc explanation method, SHapley Additive exPlanations (SHAP), to explain the results and extract the most dominant features for cybersickness. Finally, we retrain the DL models with the reduced set of features. Our results show that eye-tracking features are the most dominant for cybersickness detection. Furthermore, based on the XAI-based feature ranking and dimensionality reduction, we reduce the model's size by up to 4.3×, its training time by up to 5.6×, and its inference time by up to 3.8×, while achieving higher cybersickness detection accuracy and low regression error on the Fast Motion Scale (FMS). Our proposed lite LSTM model achieves 94% accuracy in classifying cybersickness and an FMS (1–10) regression Root Mean Square Error (RMSE) of 0.30, outperforming the state of the art. Our proposed LiteVR framework can help researchers and practitioners analyze, detect, and deploy their DL-based cybersickness detection models on standalone VR HMDs.
{"title":"LiteVR: Interpretable and Lightweight Cybersickness Detection using Explainable AI","authors":"Ripan Kumar Kundu, Rifatul Islam, J. Quarles, K. A. Hoque","doi":"10.1109/VR55154.2023.00076","DOIUrl":"https://doi.org/10.1109/VR55154.2023.00076","url":null,"abstract":"Cybersickness is a common ailment associated with virtual reality (VR) user experiences. Several automated methods exist based on machine learning (ML) and deep learning (DL) to detect cyber-sickness. However, most of these cybersickness detection methods are perceived as computationally intensive and black-box methods. Thus, those techniques are neither trustworthy nor practical for deploying on standalone energy-constrained VR head-mounted devices (HMDs). In this work, we present an explainable artificial intelligence (XAI)-based framework Lite VR for cybersickness detection, explaining the model's outcome, reducing the feature dimensions, and overall computational costs. First, we develop three cybersick-ness DL models based on long-term short-term memory (LSTM), gated recurrent unit (GRU), and multilayer perceptron (MLP). Then, we employed a post-hoc explanation, such as SHapley Additive Explanations (SHAP), to explain the results and extract the most dominant features of cybersickness. Finally, we retrain the DL models with the reduced number of features. Our results show that eye-tracking features are the most dominant for cybersickness detection. Furthermore, based on the XAI-based feature ranking and dimensionality reduction, we significantly reduce the model's size by up to 4.3×, training time by up to 5.6×, and its inference time by up to 3.8×, with higher cybersickness detection accuracy and low regression error (i.e., on Fast Motion Scale (FMS)). Our proposed lite LSTM model obtained an accuracy of 94% in classifying cyber-sickness and regressing (i.e., FMS 1–10) with a Root Mean Square Error (RMSE) of 0.30, which outperforms the state-of-the-art. Our proposed Lite VR framework can help researchers and practitioners analyze, detect, and deploy their DL-based cybersickness detection models in standalone VR HMDs.","PeriodicalId":346767,"journal":{"name":"2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-02-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133344028","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Towards an Understanding of Distributed Asymmetric Collaborative Visualization on Problem-solving
Pub Date : 2023-02-03 | DOI: 10.1109/VR55154.2023.00054
Wai Tong, Meng Xia, Kamkwai Wong, D. Bowman, T. Pong, Huamin Qu, Yalong Yang
This paper provides empirical knowledge of the user experience of collaborative visualization in a distributed asymmetric setting, gathered through controlled user studies. With access to a variety of computing devices, such as Virtual Reality (VR) head-mounted displays, scenarios emerge in which collaborators have to, or prefer to, use different computing environments in different places. However, we still lack an understanding of using VR in an asymmetric setting for collaborative visualization. To gain an initial understanding and better inform the design of asymmetric systems, we first conducted a formative study with 12 pairs of participants. All participants collaborated in an asymmetric setting (PC-VR) and in symmetric settings (PC-PC and VR-VR). We then improved our asymmetric design based on the key findings and observations from the first study. Another ten pairs of participants collaborated under enhanced PC-VR and PC-PC conditions in a follow-up study. We found that a well-designed asymmetric collaboration system can be as effective as a symmetric system. Surprisingly, participants using the PC perceived less mental demand and effort in the asymmetric setting (PC-VR) than in the symmetric setting (PC-PC). We provide fine-grained discussions of the trade-offs between different collaboration settings.
{"title":"Towards an Understanding of Distributed Asymmetric Collaborative Visualization on Problem-solving","authors":"Wai Tong, Meng Xia, Kamkwai Wong, D. Bowman, T. Pong, Huamin Qu, Yalong Yang","doi":"10.1109/VR55154.2023.00054","DOIUrl":"https://doi.org/10.1109/VR55154.2023.00054","url":null,"abstract":"This paper provided empirical knowledge of the user experience for using collaborative visualization in a distributed asymmetrical setting through controlled user studies. With the ability to access various computing devices, such as Virtual Reality (VR) head-mounted displays, scenarios emerge when collaborators have to or prefer to use different computing environments in different places. However, we still lack an understanding of using VR in an asymmetric setting for collaborative visualization. To get an initial understanding and better inform the designs for asymmetric systems, we first conducted a formative study with 12 pairs of participants. All participants collaborated in asymmetric (PC-VR) and symmetric settings (PC-PC and VR-VR). We then improved our asymmetric design based on the key findings and observations from the first study. Another ten pairs of participants collaborated with enhanced PC-VR and PC-PC conditions in a follow-up study. We found that a well-designed asymmetric collaboration system could be as effective as a symmetric system. Surprisingly, participants using PC perceived less mental demand and effort in the asymmetric setting (PC-VR) compared to the symmetric setting (PC-PC). We provided fine-grained discussions about the trade-offs between different collaboration settings.","PeriodicalId":346767,"journal":{"name":"2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132201459","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Evoking empathy with visually impaired people through an augmented reality embodiment experience
Pub Date : 2023-02-01 | DOI: 10.1109/VR55154.2023.00034
R. Guarese, Emma Pretty, Haytham M. Fayek, Fabio Zambetta, R. V. Schyndel
To promote empathy with people who have disabilities, we propose a multi-sensory interactive experience that allows sighted users to embody having a visual impairment while using assistive technologies. The experiment involves blindfolded sighted participants interacting with a variety of sonification methods in order to locate targets and place objects in a real kitchen environment. Prior to the tests, we asked the blind and visually impaired (BVI) community about the perceived benefits of increasing such empathy. To test empathy, we adapted an Empathy and Sympathy Response scale to gather sighted people's self-reported empathy and the empathy perceived by the BVI community, from sighted (N = 77) and BVI (N = 20) respondents respectively. We re-tested sighted people's empathy after the experiment and found that their empathetic and sympathetic responses (N = 15) significantly increased. Furthermore, survey results suggest that the BVI community believes the use of such empathy-evoking embodied experiences may lead to the development of new assistive technologies.
{"title":"Evoking empathy with visually impaired people through an augmented reality embodiment experience","authors":"R. Guarese, Emma Pretty, Haytham M. Fayek, Fabio Zambetta, R. V. Schyndel","doi":"10.1109/VR55154.2023.00034","DOIUrl":"https://doi.org/10.1109/VR55154.2023.00034","url":null,"abstract":"To promote empathy with people that have disabilities, we propose a multi-sensory interactive experience that allows sighted users to embody having a visual impairment whilst using assistive technologies. The experiment involves blindfolded sighted participants interacting with a variety of sonification methods in order to locate targets and place objects in a real kitchen environment. Prior to the tests, we enquired about the perceived benefits of increasing said empathy from the blind and visually impaired (BVI) community. To test empathy, we adapted an Empathy and Sympathy Response scale to gather sighted people's self-reported and perceived empathy with the BVI community from both sighted (N = 77) and BVI people (N = 20) respectively. We re-tested sighted people's empathy after the experiment and found that their empathetic and sympathetic responses (N = 15) significantly increased. Furthermore, survey results suggest that the BVI community believes the use of these empathy-evoking embodied experiences may lead to the development of new assistive technologies.","PeriodicalId":346767,"journal":{"name":"2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR)","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124380672","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
MoPeDT: A Modular Head-Mounted Display Toolkit to Conduct Peripheral Vision Research
Pub Date : 2023-01-26 | DOI: 10.1109/VR55154.2023.00084
Matthias Albrecht, Lorenz Asslander, Harald Reiterer, S. Streuber
Peripheral vision plays a significant role in human perception and orientation. However, its relevance to human-computer interaction, especially for head-mounted displays, has not been fully explored. In the past, a few specialized appliances were developed to display visual cues in the periphery, each designed for a single specific use case. A multi-purpose headset that exclusively augments peripheral vision has not existed until now. We introduce MoPeDT: Modular Peripheral Display Toolkit, a freely available, flexible, reconfigurable, and extendable headset for conducting peripheral vision research. MoPeDT can be built with a 3D printer and off-the-shelf components. It features multiple spatially configurable near-eye display modules and full 3D tracking inside and outside the lab. With our system, researchers and designers may easily develop and prototype novel peripheral vision interaction and visualization techniques. We demonstrate the versatility of our headset with several possible applications for spatial awareness, balance, interaction, feedback, and notifications. We conducted a small study to evaluate the usability of the system and found that participants were largely not irritated by the peripheral cues, although the headset's comfort could be further improved. We also evaluated our system against established heuristics for human-computer interaction toolkits to show how MoPeDT adapts to changing requirements, lowers the entry barrier for peripheral vision research, and provides expressive power through the combination of modular building blocks.
{"title":"MoPeDT: A Modular Head-Mounted Display Toolkit to Conduct Peripheral Vision Research","authors":"Matthias Albrecht, Lorenz Asslander, Harald Reiterer, S. Streuber","doi":"10.1109/VR55154.2023.00084","DOIUrl":"https://doi.org/10.1109/VR55154.2023.00084","url":null,"abstract":"Peripheral vision plays a significant role in human perception and orientation. However, its relevance for human-computer interaction, especially head-mounted displays, has not been fully explored yet. In the past, a few specialized appliances were developed to display visual cues in the periphery, each designed for a single specific use case only. A multi-purpose headset to exclusively augment peripheral vision did not exist yet. We introduce MoPeDT: Modular Peripheral Display Toolkit, a freely available, flexible, reconfigurable, and extendable headset to conduct peripheral vision research. MoPeDT can be built with a 3D printer and off-the-shelf components. It features multiple spatially configurable near-eye display modules and full 3D tracking inside and outside the lab. With our system, researchers and designers may easily develop and prototype novel peripheral vision interaction and visualization techniques. We demonstrate the versatility of our headset with several possible applications for spatial awareness, balance, interaction, feedback, and notifications. We conducted a small study to evaluate the usability of the system. We found that participants were largely not irritated by the peripheral cues, but the headset's comfort could be further improved. We also evaluated our system based on established heuristics for human-computer interaction toolkits to show how MoPeDT adapts to changing requirements, lowers the entry barrier for peripheral vision research, and facilitates expressive power in the combination of modular building blocks.","PeriodicalId":346767,"journal":{"name":"2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR)","volume":"95 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123231652","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
HoloBeam: Paper-Thin Near-Eye Displays
Pub Date : 2022-12-08 | DOI: 10.1109/VR55154.2023.00073
K. Akşit, Yuta Itoh
An emerging alternative to conventional Augmented Reality (AR) glasses designs, beaming displays promise slim AR glasses free from challenging design trade-offs, including battery-related limits and computational budget constraints. Beaming displays remove active components such as batteries and electronics from the AR glasses and move them to a projector that projects images to the user from a distance (1–2 meters), so that users wear only passive optical eyepieces. However, earlier implementations of these displays delivered poor resolution (7 cycles per degree) without any optical focus cues and were introduced with a bulky eyepiece (~50 mm thick). This paper introduces a new milestone for beaming displays, which we call HoloBeam. In this new design, a custom holographic projector populates a micro-volume located at some distance (1–2 meters) with multiple planes of images. Users view magnified copies of these images from this small volume with the help of an eyepiece that is either a Holographic Optical Element (HOE) or a set of lenses. Our HoloBeam prototypes demonstrate the thinnest AR glasses to date, with submillimeter thickness (e.g., the HOE film is only 120 µm thick). In addition, the HoloBeam prototypes demonstrate near-retinal resolution (24 cycles per degree) with a 70-degree-wide field of view.
{"title":"HoloBeam: Paper-Thin Near-Eye Displays","authors":"K. Akşit, Yuta Itoh","doi":"10.1109/VR55154.2023.00073","DOIUrl":"https://doi.org/10.1109/VR55154.2023.00073","url":null,"abstract":"An emerging alternative to conventional Augmented Reality (AR) glasses designs, Beaming displays promise slim AR glasses free from challenging design trade-offs, including battery-related limits or computational budget-related issues. These beaming displays remove active components such as batteries and electronics from AR glasses and move them to a projector that projects images to a user from a distance (1–2 meters), where users wear only passive optical eyepieces. However, earlier implementations of these displays delivered poor resolutions (7 cycles per degree) without any optical focus cues and were introduced with a bulky form-factor eyepiece ($sim 50 mm$ thick). This paper introduces a new milestone for beaming displays, which we call HoloBeam. In this new design, a custom holographic projector populates a micro-volume located at some distance (1–2 meters) with multiple planes of images. Users view magnified copies of these images from this small volume with the help of an eyepiece that is either a Holographic Optical Element (HOE) or a set of lenses. Our HoloBeam prototypes demonstrate the thinnest AR glasses to date with submillimeter thickness (e.g., HOE film is only $120 mu m$ thick). In addition, HoloBeam prototypes demonstrate near retinal resolutions (24 cycles per degree) with a 70 degrees-wide field of view.","PeriodicalId":346767,"journal":{"name":"2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131420109","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Realistic Defocus Blur for Multiplane Computer-Generated Holography
Pub Date : 2022-05-14 | DOI: 10.1109/VR55154.2023.00057
Koray Kavaklı, Yuta Itoh, H. Urey, K. Akşit
This paper introduces a new multiplane computer-generated holography (CGH) computation method that reconstructs artifact-free, high-quality holograms with natural-looking defocus blur. Our method introduces a new targeting scheme and a new loss function. While the targeting scheme accounts for defocused parts of the scene at each depth plane, the new loss function analyzes focused and defocused parts separately in the reconstructed images. Our method supports phase-only CGH calculations using various iterative (e.g., Gerchberg-Saxton, gradient descent) and non-iterative (e.g., double phase) CGH techniques. We achieve our best image quality using a modified gradient descent-based optimization recipe in which we introduce a constraint inspired by the double phase method. We validate our method experimentally using our proof-of-concept holographic display, comparing various algorithms, including on multi-depth scenes with sparse and dense content.
{"title":"Realistic Defocus Blur for Multiplane Computer-Generated Holography","authors":"Koray Kavaklı, Yuta Itoh, H. Urey, K. Akşit","doi":"10.1109/VR55154.2023.00057","DOIUrl":"https://doi.org/10.1109/VR55154.2023.00057","url":null,"abstract":"This paper introduces a new multiplane CGH computation method to reconstruct artifact-free high-quality holograms with natural-looking defocus blur. Our method introduces a new targeting scheme and a new loss function. While the targeting scheme accounts for defocused parts of the scene at each depth plane, the new loss function analyzes focused and defocused parts separately in reconstructed images. Our method support phase-only CGH calculations using various iterative (e.g., Gerchberg-Saxton, Gradient Descent) and non-iterative (e.g., Double Phase) CGH techniques. We achieve our best image quality using a modified gradient descent-based optimization recipe where we introduce a constraint inspired by the double phase method. We validate our method experimentally using our proof-of-concept holographic display, comparing various algorithms, including multi-depth scenes with sparse and dense contents.","PeriodicalId":346767,"journal":{"name":"2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR)","volume":"93 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-05-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114238926","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}