
2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR): Latest Publications

LiteVR: Interpretable and Lightweight Cybersickness Detection using Explainable AI
Pub Date: 2023-02-05 | DOI: 10.1109/VR55154.2023.00076
Ripan Kumar Kundu, Rifatul Islam, J. Quarles, K. A. Hoque
Cybersickness is a common ailment associated with virtual reality (VR) user experiences. Several automated methods based on machine learning (ML) and deep learning (DL) exist to detect cybersickness. However, most of these detection methods are computationally intensive, black-box approaches, and are therefore neither trustworthy nor practical to deploy on standalone, energy-constrained VR head-mounted devices (HMDs). In this work, we present LiteVR, an explainable artificial intelligence (XAI)-based framework for cybersickness detection that explains the model's outcome while reducing the feature dimensions and the overall computational cost. First, we develop three cybersickness DL models based on long short-term memory (LSTM), gated recurrent units (GRU), and multilayer perceptrons (MLP). Then, we apply a post-hoc explanation method, SHapley Additive exPlanations (SHAP), to explain the results and extract the most dominant cybersickness features. Finally, we retrain the DL models with the reduced feature set. Our results show that eye-tracking features are the most dominant for cybersickness detection. Furthermore, based on the XAI-based feature ranking and dimensionality reduction, we significantly reduce the model's size by up to 4.3×, its training time by up to 5.6×, and its inference time by up to 3.8×, while achieving higher cybersickness detection accuracy and low regression error on the Fast Motion Scale (FMS). Our proposed lite LSTM model obtains 94% accuracy in classifying cybersickness and regresses FMS scores (1–10) with a root mean square error (RMSE) of 0.30, outperforming the state of the art. LiteVR can help researchers and practitioners analyze, detect, and deploy their DL-based cybersickness detection models in standalone VR HMDs.
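The XAI-driven feature-reduction loop described above can be sketched in a few lines. The following is a minimal, hypothetical illustration rather than the authors' implementation: it substitutes a small scikit-learn MLP for the paper's LSTM/GRU/MLP models, trains on synthetic stand-ins for physiological and eye-tracking features, ranks features by mean absolute SHAP value with a KernelExplainer, and retrains a "lite" model on the top-k features. All feature names, data, and the choice of k are illustrative assumptions.

```python
# A minimal, hypothetical sketch of the SHAP-based feature-reduction loop --
# not the authors' implementation. Feature names, data, and k are assumed.
import numpy as np
import shap
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-ins for physiological / eye-tracking features and FMS labels.
feature_names = ["pupil_diameter", "gaze_dispersion", "blink_rate",
                 "heart_rate", "head_yaw_vel", "head_pitch_vel"]
X = rng.normal(size=(500, len(feature_names)))
y = 5 + 2.0 * X[:, 0] + 1.5 * X[:, 1] + rng.normal(scale=0.5, size=500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# 1) Train the full model (the paper uses LSTM/GRU/MLP; an MLP keeps this short).
full = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000,
                    random_state=0).fit(X_tr, y_tr)

# 2) Post-hoc explanation: rank features by mean |SHAP value|.
explainer = shap.KernelExplainer(full.predict, shap.sample(X_tr, 50))
shap_vals = explainer.shap_values(X_te[:50])
importance = np.abs(shap_vals).mean(axis=0)
ranking = np.argsort(importance)[::-1]
print("feature ranking:", [feature_names[i] for i in ranking])

# 3) Retrain a "lite" model on the top-k features only.
k = 2
top = ranking[:k]
lite = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(X_tr[:, top], y_tr)
rmse = np.sqrt(np.mean((lite.predict(X_te[:, top]) - y_te) ** 2))
print(f"lite model RMSE on held-out FMS scores: {rmse:.2f}")
```

The shrunken input dimension is what drives the reported reductions in model size and inference time: fewer input features mean fewer weights in the first layer and less sensor data to collect on the HMD.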
Citations: 2
Towards an Understanding of Distributed Asymmetric Collaborative Visualization on Problem-solving
Pub Date: 2023-02-03 | DOI: 10.1109/VR55154.2023.00054
Wai Tong, Meng Xia, Kamkwai Wong, D. Bowman, T. Pong, Huamin Qu, Yalong Yang
This paper provides empirical knowledge of the user experience of collaborative visualization in a distributed asymmetric setting, gathered through controlled user studies. With access to various computing devices, such as Virtual Reality (VR) head-mounted displays, scenarios emerge in which collaborators must, or prefer to, use different computing environments in different places. However, we still lack an understanding of using VR for collaborative visualization in such asymmetric settings. To gain an initial understanding and better inform the design of asymmetric systems, we first conducted a formative study with 12 pairs of participants. All participants collaborated in an asymmetric setting (PC-VR) and in symmetric settings (PC-PC and VR-VR). We then improved our asymmetric design based on the key findings and observations from this first study. In a follow-up study, another ten pairs of participants collaborated under the enhanced PC-VR and PC-PC conditions. We found that a well-designed asymmetric collaboration system can be as effective as a symmetric system. Surprisingly, participants using a PC perceived less mental demand and effort in the asymmetric setting (PC-VR) than in the symmetric setting (PC-PC). We provide a fine-grained discussion of the trade-offs between the different collaboration settings.
Citations: 2
Evoking empathy with visually impaired people through an augmented reality embodiment experience
Pub Date: 2023-02-01 | DOI: 10.1109/VR55154.2023.00034
R. Guarese, Emma Pretty, Haytham M. Fayek, Fabio Zambetta, R. V. Schyndel
To promote empathy with people who have disabilities, we propose a multi-sensory interactive experience that allows sighted users to embody having a visual impairment while using assistive technologies. The experiment has blindfolded sighted participants interact with a variety of sonification methods in order to locate targets and place objects in a real kitchen environment. Prior to the tests, we asked the blind and visually impaired (BVI) community about the perceived benefits of increasing such empathy. To measure empathy, we adapted an Empathy and Sympathy Response scale, gathering sighted people's self-reported empathy with the BVI community from sighted respondents (N = 77) and the empathy perceived by BVI respondents (N = 20). We re-tested sighted participants' empathy after the experiment and found that their empathetic and sympathetic responses (N = 15) significantly increased. Furthermore, survey results suggest that the BVI community believes the use of these empathy-evoking embodied experiences may lead to the development of new assistive technologies.
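As a sketch of how such a pre/post comparison can be analyzed, the snippet below runs a Wilcoxon signed-rank test on paired empathy scores. This is a hypothetical illustration, not the authors' analysis; the scores are synthetic placeholders and the scale range is assumed.

```python
# A minimal, hypothetical sketch of a pre/post empathy comparison --
# not the authors' analysis. Scores below are synthetic placeholders.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(1)

# Paired empathy scores (e.g., averaged 1-7 Likert items) for N = 15
# participants, before and after the embodiment experience.
pre = rng.uniform(3.0, 5.0, size=15)
post = pre + rng.uniform(0.0, 1.5, size=15)  # simulate an increase

# Wilcoxon signed-rank: a paired, non-parametric test that suits
# ordinal Likert-style data better than a paired t-test.
stat, p = wilcoxon(pre, post)
print(f"W = {stat:.1f}, p = {p:.4f}")
if p < 0.05:
    print("Post-experience empathy differs significantly from baseline.")
```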
Citations: 1
MoPeDT: A Modular Head-Mounted Display Toolkit to Conduct Peripheral Vision Research
Pub Date: 2023-01-26 | DOI: 10.1109/VR55154.2023.00084
Matthias Albrecht, Lorenz Asslander, Harald Reiterer, S. Streuber
Peripheral vision plays a significant role in human perception and orientation. However, its relevance for human-computer interaction, especially for head-mounted displays, has not yet been fully explored. In the past, a few specialized appliances were developed to display visual cues in the periphery, each designed for a single specific use case; a multi-purpose headset that exclusively augments peripheral vision did not yet exist. We introduce MoPeDT: Modular Peripheral Display Toolkit, a freely available, flexible, reconfigurable, and extendable headset for conducting peripheral vision research. MoPeDT can be built with a 3D printer and off-the-shelf components. It features multiple spatially configurable near-eye display modules and full 3D tracking inside and outside the lab. With our system, researchers and designers can easily develop and prototype novel peripheral vision interaction and visualization techniques. We demonstrate the versatility of our headset with several possible applications for spatial awareness, balance, interaction, feedback, and notifications. We conducted a small study to evaluate the usability of the system and found that participants were largely not irritated by the peripheral cues, though the headset's comfort could be further improved. We also evaluated our system against established heuristics for human-computer interaction toolkits to show how MoPeDT adapts to changing requirements, lowers the entry barrier for peripheral vision research, and facilitates expressive power through the combination of modular building blocks.
Citations: 0
HoloBeam: Paper-Thin Near-Eye Displays
Pub Date: 2022-12-08 | DOI: 10.1109/VR55154.2023.00073
K. Akşit, Yuta Itoh
An emerging alternative to conventional Augmented Reality (AR) glasses designs, beaming displays promise slim AR glasses free from challenging design trade-offs, including battery-related limits and computational budget-related issues. These beaming displays remove active components such as batteries and electronics from the AR glasses and move them to a projector that projects images to a user from a distance (1–2 meters), so users wear only passive optical eyepieces. However, earlier implementations of these displays delivered poor resolutions (7 cycles per degree) without any optical focus cues and were introduced with a bulky form-factor eyepiece (~50 mm thick). This paper introduces a new milestone for beaming displays, which we call HoloBeam. In this new design, a custom holographic projector populates a micro-volume located at some distance (1–2 meters) with multiple planes of images. Users view magnified copies of these images from this small volume with the help of an eyepiece that is either a Holographic Optical Element (HOE) or a set of lenses. Our HoloBeam prototypes demonstrate the thinnest AR glasses to date, with submillimeter thickness (e.g., the HOE film is only 120 µm thick). In addition, HoloBeam prototypes demonstrate near-retinal resolutions (24 cycles per degree) with a 70-degree-wide field of view.
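The resolution figures quoted here follow from simple sampling arithmetic: a display spanning a given field of view resolves at most one cycle per two pixels, so cpd = pixels / (2 × FOV). The sketch below is an illustrative calculation, not taken from the paper; the pixel counts it derives are assumptions implied by the quoted numbers.

```python
# Illustrative back-of-envelope arithmetic (not from the paper): relate
# cycles per degree (cpd) to pixel count across a field of view (FOV).
# One cycle (a black/white line pair) needs at least two pixels, so
#   cpd = pixels_per_degree / 2 = n_pixels / (2 * fov_degrees).

def cycles_per_degree(n_pixels: int, fov_degrees: float) -> float:
    """Nyquist-limited angular resolution of a display spanning fov_degrees."""
    return n_pixels / (2.0 * fov_degrees)

def pixels_needed(cpd: float, fov_degrees: float) -> int:
    """Pixels required across the FOV to reach a target cpd."""
    return round(2.0 * cpd * fov_degrees)

# HoloBeam reports ~24 cpd over a 70-degree FOV:
print(pixels_needed(24, 70))   # -> 3360 pixels across the FOV
# while 7 cpd over the same FOV would need only:
print(pixels_needed(7, 70))    # -> 980 pixels
```

By this arithmetic, a roughly 4K-wide image (3840 pixels) spread over 70 degrees would yield about 27 cpd, consistent with the near-retinal claim.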
Citations: 2
Realistic Defocus Blur for Multiplane Computer-Generated Holography
Pub Date: 2022-05-14 | DOI: 10.1109/VR55154.2023.00057
Koray Kavaklı, Yuta Itoh, H. Urey, K. Akşit
This paper introduces a new multiplane computer-generated holography (CGH) computation method to reconstruct artifact-free, high-quality holograms with natural-looking defocus blur. Our method introduces a new targeting scheme and a new loss function: while the targeting scheme accounts for defocused parts of the scene at each depth plane, the loss function analyzes focused and defocused parts separately in the reconstructed images. Our method supports phase-only CGH calculations using various iterative (e.g., Gerchberg-Saxton, gradient descent) and non-iterative (e.g., double phase) CGH techniques. We achieve our best image quality using a modified gradient-descent-based optimization recipe in which we introduce a constraint inspired by the double phase method. We validate our method experimentally on our proof-of-concept holographic display, comparing various algorithms on multi-depth scenes with sparse and dense content.
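As background for the optimization the abstract refers to, the sketch below shows a generic gradient-descent, phase-only, multiplane CGH loop: a phase pattern is optimized so that the field, propagated to several depth planes with the angular spectrum method, matches per-plane target intensities. This is a minimal illustration of the general technique, not the authors' implementation; their targeting scheme and focused/defocused loss are not reproduced, and the wavelength, pixel pitch, plane depths, and targets are assumed values.

```python
# A minimal, hypothetical sketch of gradient-descent, phase-only multiplane
# CGH -- an illustration of the generic technique, not the authors' method.
import torch

def asm_propagate(field, wavelength, pitch, distance):
    """Propagate a complex field by `distance` via the angular spectrum method."""
    n, m = field.shape
    fx = torch.fft.fftfreq(m, d=pitch)
    fy = torch.fft.fftfreq(n, d=pitch)
    FX, FY = torch.meshgrid(fx, fy, indexing="xy")
    arg = 1.0 / wavelength**2 - FX**2 - FY**2       # squared axial frequency
    kz = 2 * torch.pi * torch.sqrt(torch.clamp(arg, min=0.0))  # evanescent -> 0
    H = torch.exp(1j * kz * distance)               # transfer function
    return torch.fft.ifft2(torch.fft.fft2(field) * H)

wavelength, pitch = 515e-9, 8e-6                    # green laser, 8 um SLM pitch
planes = [1e-3, 2e-3, 3e-3]                         # three depth planes (meters)
targets = [torch.rand(256, 256) for _ in planes]    # stand-in target intensities

phase = torch.zeros(256, 256, requires_grad=True)   # phase-only hologram
scale = torch.ones(1, requires_grad=True)           # learned brightness scale
opt = torch.optim.Adam([phase, scale], lr=0.05)

for step in range(200):
    opt.zero_grad()
    field = torch.exp(1j * phase)                   # unit-amplitude SLM field
    loss = torch.zeros(())
    for z, target in zip(planes, targets):
        recon = asm_propagate(field, wavelength, pitch, z).abs() ** 2
        loss = loss + torch.nn.functional.mse_loss(scale * recon, target)
    loss.backward()
    opt.step()
```

The paper's contribution sits in how the per-plane targets and the loss are constructed (separating focused from defocused regions), rather than in the optimization loop itself, which is standard across iterative CGH methods.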
Citations: 8