
Latest Publications: 2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR)

A Compact Photochromic Occlusion Capable See-through Display with Holographic Lenses
Pub Date : 2023-03-01 DOI: 10.1109/VR55154.2023.00039
Chun Wei Ooi, Yuichi Hiroi, Yuta Itoh
Occlusion is a crucial visual element in optical see-through (OST) augmented reality; however, implementing occlusion in OST displays while addressing various design trade-offs is a difficult problem. In contrast to the traditional method of using spatial light modulators (SLMs) for the occlusion mask, using photochromic materials as occlusion masks can effectively eliminate diffraction artifacts in see-through views due to the lack of electronic pixels, thus providing superior see-through image quality. However, this design requires UV illumination to activate the photochromic material, which traditionally requires multiple SLMs, resulting in a larger form factor for the system. This paper presents a compact photochromic occlusion-capable OST design using multilayer, wavelength-dependent holographic optical lenses (HOLs). Our approach employs a single digital micromirror display (DMD) to form both the occlusion mask with UV light and a virtual image with visible light in a time-multiplexed manner. We demonstrate our proof-of-concept system on a bench-top setup and assess the appearance and contrast of the displayed image. We also suggest potential improvements for current prototypes to encourage the community to explore this occlusion approach.
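As context for the time-multiplexing idea above, here is a minimal Python sketch of a single DMD alternating between a UV sub-frame (writing the photochromic occlusion mask) and a visible sub-frame (forming the virtual image). The resolution, threshold, and `drive_dmd` stub are illustrative assumptions, not details from the paper.

```python
# Illustrative sketch of the time-multiplexed single-DMD idea: the same
# micromirror array alternates between a UV "mask" sub-frame (activating the
# photochromic layer) and a visible "image" sub-frame. All names and values
# here are hypothetical placeholders, not taken from the paper.
import numpy as np

DMD_RES = (768, 1024)  # hypothetical mirror-array resolution

def occlusion_mask(virtual_image: np.ndarray, threshold: float = 0.05) -> np.ndarray:
    """Binary mask: darken the photochromic layer wherever virtual content is drawn."""
    return (virtual_image.mean(axis=-1) > threshold).astype(np.uint8)

def drive_dmd(pattern, channel):
    # Placeholder for hardware I/O; a real system would synchronize each
    # sub-frame with the corresponding (UV or visible) light source.
    pass

def run_frames(frames):
    """Alternate UV-mask and visible-image sub-frames on the single DMD."""
    for rgb in frames:
        drive_dmd(occlusion_mask(rgb), channel="uv")   # sub-frame 1: occlusion mask
        drive_dmd(rgb.mean(-1), channel="visible")     # sub-frame 2: virtual image

if __name__ == "__main__":
    demo = [np.zeros(DMD_RES + (3,), dtype=float) for _ in range(2)]
    demo[0][300:400, 400:600, 0] = 1.0  # a red virtual patch to be occluded
    run_frames(demo)
```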
{"title":"A Compact Photochromic Occlusion Capable See-through Display with Holographic Lenses","authors":"Chun Wei Ooi, Yuichi Hiroi, Yuta Itoh","doi":"10.1109/VR55154.2023.00039","DOIUrl":"https://doi.org/10.1109/VR55154.2023.00039","url":null,"abstract":"Occlusion is a crucial visual element in optical see-through (OST) augmented reality, however, implementing occlusion in OST displays while addressing various design trade-offs is a difficult problem. In contrast to the traditional method of using spatial light modulators (SLMs) for the occlusion mask, using photochromic materials as occlusion masks can effectively eliminate diffraction artifacts in see-through views due to the lack of electronic pixels, thus providing superior see-through image quality. However, this design requires UV illumination to activate the photochromic mate-rial, which traditionally requires multiple SLMs, resulting in a larger form factor for the system. This paper presents a compact photochromic occlusion-capable OST design using multilayer, wavelength-dependent holographic optical lenses (HOLs). Our approach employs a single digital mi-cromirror display (DMD) to form both the occlusion mask with UV light and a virtual image with visible light in a time-multiplexed man-ner. We demonstrate our proof-of-concept system on a bench-top setup and assess the appearance and contrasts of the displayed image. We also suggest potential improvements for current prototypes to encourage the community to explore this occlusion approach.","PeriodicalId":346767,"journal":{"name":"2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115425024","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Optimizing Product Placement for Virtual Stores
Pub Date : 2023-03-01 DOI: 10.1109/VR55154.2023.00049
Wei Liang, Luhui Wang, Xinzhe Yu, Changyang Li, Rawan Alghofaili, Yining Lang, L. Yu
The recent popularity of consumer-grade virtual reality devices has enabled users to experience immersive shopping in virtual environments. As in a real-world store, the placement of products in a virtual store should appeal to shoppers, yet appealing layouts are time-consuming, tedious, and non-trivial to create manually. Thus, this work introduces a novel approach for automatically optimizing product placement in virtual stores. Our approach considers product exposure and spatial constraints, applying an optimizer to search for optimal product placement solutions. We conducted qualitative scene-rationality and quantitative product-exposure experiments with users to validate our approach. The results show that the proposed approach can synthesize reasonable product placements and increase product exposure across different virtual stores.
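To make the optimization framing concrete, the sketch below searches product-to-slot assignments with simulated annealing, maximizing exposure under a toy spatial constraint. The annealer, exposure weights, and constraint are invented for illustration; the abstract does not specify the authors' optimizer.

```python
# Toy product-placement search: maximize total exposure subject to a spatial
# constraint. Simulated annealing stands in for the paper's (unspecified)
# optimizer; all weights and the constraint are made up for this example.
import math
import random

SLOT_EXPOSURE = {"eye_level": 1.0, "mid": 0.6, "bottom": 0.3, "corner": 0.1}
PRODUCT_DEMAND = {"soda": 0.9, "chips": 0.7, "rice": 0.4, "salt": 0.2}

def score(assign):
    """Total exposure, with a penalty encoding one toy spatial constraint."""
    total = sum(PRODUCT_DEMAND[p] * SLOT_EXPOSURE[s] for p, s in assign.items())
    if assign["rice"] != "bottom":  # heavy items must sit on the bottom shelf
        total -= 1.0
    return total

def anneal(steps=5000, t0=1.0, seed=7):
    rng = random.Random(seed)
    cur = dict(zip(PRODUCT_DEMAND, rng.sample(list(SLOT_EXPOSURE), len(PRODUCT_DEMAND))))
    best = dict(cur)
    for i in range(steps):
        t = t0 * (1 - i / steps) + 1e-6
        a, b = rng.sample(list(PRODUCT_DEMAND), 2)
        nxt = dict(cur)
        nxt[a], nxt[b] = nxt[b], nxt[a]  # neighbor move: swap two products' slots
        if math.exp(min(0.0, (score(nxt) - score(cur)) / t)) > rng.random():
            cur = nxt
        if score(cur) > score(best):
            best = dict(cur)
    return best, round(score(best), 2)

print(anneal())
```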
{"title":"Optimizing Product Placement for Virtual Stores","authors":"Wei Liang, Luhui Wang, Xinzhe Yu, Changyang Li, Rawan Alghofaili, Yining Lang, L. Yu","doi":"10.1109/VR55154.2023.00049","DOIUrl":"https://doi.org/10.1109/VR55154.2023.00049","url":null,"abstract":"The recent popularity of consumer-grade virtual reality devices has enabled users to experience immersive shopping in virtual environments. As in a real-world store, the placement of products in a virtual store should appeal to shoppers, which could be time-consuming, tedious, and non-trivial to create manually. Thus, this work introduces a novel approach for automatically optimizing product placement in virtual stores. Our approach considers product exposure and spatial constraints, applying an optimizer to search for optimal product placement solutions. We conducted qualitative scene rationality and quantitative product exposure experiments to validate our approach with users. The results show that the proposed approach can synthesize reasonable product placements and increase product exposures for different virtual stores.","PeriodicalId":346767,"journal":{"name":"2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121120190","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Exploring Enhancements towards Gaze Oriented Parallel Views in Immersive Tasks
Pub Date : 2023-03-01 DOI: 10.1109/VR55154.2023.00077
Theophilus Teo, Kuniharu Sakurada, M. Sugimoto
Parallel view is a technique that allows a VR user to see multiple locations at a time. It enables the user to control several remote or virtual body parts while seeing parallel views to solve synchronous tasks. However, prior work has only explored the benefits and drawbacks of a user performing different tasks. In this paper, we explore enhancements to a singular or asynchronous task by utilizing information obtained in parallel views. We developed three prototypes in which parallel views are fixed, move in symmetric order, or follow the user's eye gaze. We conducted a user study comparing each prototype against traditional VR (without parallel views) in three types of tasks: object search and interaction tasks in 1) a simple environment and 2) a complex environment, and 3) an object-distance estimation task. We found that parallel views improved multi-embodiment, with each technique helping different tasks. The condition without parallel views provided a cleaner interface, improving spatial presence, reducing mental effort, and boosting user performance. However, participants' feedback highlighted the potential usefulness of parallel views, and the lower physical effort they afford, for solving complicated tasks.
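The gaze-following prototype suggests a simple placement rule; a hypothetical sketch is below, easing the parallel-view quad toward a point along the gaze ray with exponential smoothing. The distance and smoothing constant are assumptions, not values from the study.

```python
# Hypothetical gaze-following placement for a secondary (parallel) view: the
# view's anchor trails the gaze point instead of snapping to it, which avoids
# jitter. Constants below are illustrative, not from the paper.
import numpy as np

def smooth_anchor(prev_anchor, gaze_dir, head_pos, distance=1.5, alpha=0.1):
    """Move the parallel-view quad a fraction of the way toward the gaze point."""
    gaze_dir = gaze_dir / np.linalg.norm(gaze_dir)
    target = head_pos + distance * gaze_dir  # point 1.5 m along the gaze ray
    return (1 - alpha) * prev_anchor + alpha * target

anchor = np.array([0.0, 1.6, 1.5])  # start directly in front of the user
for gaze in ([0.1, 0.0, 1.0], [0.3, 0.1, 1.0]):  # two gaze samples
    anchor = smooth_anchor(anchor, np.array(gaze), head_pos=np.array([0.0, 1.6, 0.0]))
print(anchor)
```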
{"title":"Exploring Enhancements towards Gaze Oriented Parallel Views in Immersive Tasks","authors":"Theophilus Teo, Kuniharu Sakurada, M. Sugimoto","doi":"10.1109/VR55154.2023.00077","DOIUrl":"https://doi.org/10.1109/VR55154.2023.00077","url":null,"abstract":"Parallel view is a technique that allows a VR user to see multiple locations at a time. It enables the user to control several remote or virtual body parts while seeing parallel views to solve synchronous tasks. However, these techniques only explored the benefits and drawbacks of a user performing different tasks. In this paper, we explored enhancements on a singular or asynchronous task by utilizing information obtained in parallel views. We developed three prototypes where parallel views are fixed, moving in symmetric order, or following the user's eye gaze. We conducted a user study to compare each prototype against traditional VR (without parallel views) in three types of tasks: object search and interaction tasks in a 1) simple environment and 2) complex environment, and 3) object distances estimation task. We found parallel views improved multi-embodiment while each technique helped different tasks. No parallel view provided a clean interface, thus improving spatial presence, mental effort, and user performance. However, participants' feedback highlighted potential usefulness and a lower physical effort by using parallel views to solve complicated tasks.","PeriodicalId":346767,"journal":{"name":"2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR)","volume":"238 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132164128","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
A Haptic Stimulation-Based Training Method to Improve the Quality of Motor Imagery EEG Signal in VR
Pub Date : 2023-03-01 DOI: 10.1109/VR55154.2023.00074
Shiwei Cheng, Jieming Tian
With the emergence of brain-computer interface (BCI) technology and virtual reality (VR), improving the quality of motor imagery (MI) electroencephalogram (EEG) signals has become a key issue for MI BCI applications in VR. In this paper, we propose enhancing the quality of MI EEG signals through haptic stimulation training. We designed first-person-perspective and third-person-perspective scenes in VR, and the experimental results showed that participants' left- and right-hand MI EEG quality improved significantly compared with before training, with the mean differentiation of the left- and right-hand MI tasks improving by 21.8% and 15.7%, respectively. We implemented a BCI application system in VR and developed an MI-EEG-based game for controlling ball movement, in which participants' average classification accuracy after training in the first-person perspective reached 93.5%, a significant improvement over existing studies.
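For readers unfamiliar with MI classification, a generic left/right-hand baseline (band-power features plus linear discriminant analysis) on synthetic signals is sketched below. This is a common pipeline shown for context only, not the classifier used in the paper.

```python
# Generic MI baseline: mu/beta band power per channel, then LDA. Synthetic
# data simulates event-related desynchronization for one class. Illustrative
# only; not the authors' pipeline.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
fs, n_trials, n_ch, n_samp = 250, 100, 8, 500

def bandpower(x, lo=8, hi=30):
    """Mean power in the mu/beta band (8-30 Hz), the band modulated by MI."""
    freqs = np.fft.rfftfreq(x.shape[-1], 1 / fs)
    psd = np.abs(np.fft.rfft(x, axis=-1)) ** 2
    return psd[..., (freqs >= lo) & (freqs <= hi)].mean(axis=-1)

# Synthetic trials: class 1 suppresses band power on "contralateral" channels.
X = rng.standard_normal((n_trials, n_ch, n_samp))
y = rng.integers(0, 2, n_trials)
X[y == 1, :4] *= 0.6  # simulated event-related desynchronization

feats = bandpower(X)  # (trials, channels) feature matrix
clf = LinearDiscriminantAnalysis().fit(feats[:80], y[:80])
print("held-out accuracy:", clf.score(feats[80:], y[80:]))
```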
{"title":"A Haptic Stimulation-Based Training Method to Improve the Quality of Motor Imagery EEG Signal in VR","authors":"Shiwei Cheng, Jieming Tian","doi":"10.1109/VR55154.2023.00074","DOIUrl":"https://doi.org/10.1109/VR55154.2023.00074","url":null,"abstract":"With the emergence of brain-computer interface (BCI) technology and virtual reality (VR), how to improve the quality of motor imagery (MI) electroencephalogram (EEG) signal has become a key issue for MI BCI applications under VR. In this paper, we proposed to enhance the quality of MI EEG signal by using haptic stimulation training. We designed a first-person perspective and a third-person perspective scene under VR, and the experimental results showed that the left- and right-hand MI EEG quality of the participants improved significantly compared with that before training, and the mean differentiation of the left- and right-hand MI tasks was improved by 21.8% and 15.7%, respectively. We implemented a BCI application system in VR and developed a game based on MI EEG for control of ball movement, in which the average classification accuracy by the participants after training in the first-person perspective reached 93.5%, which was a significant improvement over existing study.","PeriodicalId":346767,"journal":{"name":"2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR)","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131597637","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Exploring 3D Interaction with Gaze Guidance in Augmented Reality
Pub Date : 2023-03-01 DOI: 10.1109/VR55154.2023.00018
Yiwei Bao, Jiaxi Wang, Zhimin Wang, Feng Lu
Recent research on hand-eye coordination has shown that gaze can improve the object selection and translation experience in certain AR scenarios. However, several limitations remain. Specifically, we investigate whether gaze can help select objects under heavy 3D occlusion and help translate 3D objects in the depth dimension. In addition, we investigate the possibility of reducing the gaze calibration burden before use. To this end, we develop new gaze-guided methods for 3D interaction in AR, along with an implicit online calibration method. We conducted two user studies to evaluate the different interaction methods; the results show that our methods not only improve the effectiveness of occluded-object selection but also significantly alleviate arm fatigue in the depth-translation task. We also evaluated the proposed implicit online calibration method and found its accuracy comparable to a standard nine-point explicit calibration, a step toward practical use in the real world.
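One plausible reading of implicit online calibration is sketched below: each confirmed selection pairs the raw gaze estimate with the known target position, and an affine correction is refit by least squares. The formulation and sample threshold are assumptions, not the paper's exact method.

```python
# Sketch of implicit online gaze calibration: selection events supply
# (raw gaze, confirmed target) pairs, from which an affine correction is
# refit by least squares. Details are assumptions for illustration.
import numpy as np

class ImplicitCalibrator:
    def __init__(self):
        self.raw, self.truth = [], []
        self.A = np.eye(3)  # affine correction in homogeneous 2D coordinates

    def observe(self, raw_gaze_xy, target_xy):
        """Collect a (raw gaze, confirmed target) pair from a selection event."""
        self.raw.append([*raw_gaze_xy, 1.0])
        self.truth.append(target_xy)
        if len(self.raw) >= 4:  # refit once enough samples exist (assumed threshold)
            R, T = np.array(self.raw), np.array(self.truth)
            self.A[:2] = np.linalg.lstsq(R, T, rcond=None)[0].T

    def correct(self, raw_gaze_xy):
        return (self.A @ [*raw_gaze_xy, 1.0])[:2]

cal = ImplicitCalibrator()
for raw, tgt in [((0.10, 0.20), (0.12, 0.25)), ((0.50, 0.40), (0.53, 0.44)),
                 ((0.80, 0.70), (0.82, 0.76)), ((0.30, 0.90), (0.33, 0.95))]:
    cal.observe(raw, tgt)
print(cal.correct((0.4, 0.5)))
```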
{"title":"Exploring 3D Interaction with Gaze Guidance in Augmented Reality","authors":"Yiwei Bao, Jiaxi Wang, Zhimin Wang, Feng Lu","doi":"10.1109/VR55154.2023.00018","DOIUrl":"https://doi.org/10.1109/VR55154.2023.00018","url":null,"abstract":"Recent research based on hand-eye coordination has shown that gaze could improve object selection and translation experience under certain scenarios in AR. However, several limitations still exist. Specifically, we investigate whether gaze could help object selection with heavy 3D occlusions and help 3D object translation in the depth dimension. In addition, we also investigate the possibility of reducing the gaze calibration burden before use. Therefore, we develop new methods with proper gaze guidance for 3D interaction in AR, and also an implicit online calibration method. We conduct two user studies to evaluate different interaction methods and the results show that our methods not only improve the effectiveness of occluded objects selection but also alleviate the arm fatigue problem significantly in the depth translation task. We also evaluate the proposed implicit online calibration method and find its accuracy comparable to standard 9 points explicit calibration, which makes a step towards practical use in the real world.","PeriodicalId":346767,"journal":{"name":"2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133410941","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
VGTC Virtual Reality Academy Award
Pub Date : 2023-03-01 DOI: 10.1109/vr55154.2023.00094
{"title":"VGTC Virtual Reality Academy Award","authors":"","doi":"10.1109/vr55154.2023.00094","DOIUrl":"https://doi.org/10.1109/vr55154.2023.00094","url":null,"abstract":"","PeriodicalId":346767,"journal":{"name":"2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127094468","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Toward Intuitive Acquisition of Occluded VR Objects Through an Interactive Disocclusion Mini-map
Pub Date : 2023-03-01 DOI: 10.1109/VR55154.2023.00061
Mykola Maslych, Yahya Hmaiti, Ryan Ghamandi, Paige Leber, Ravi Kiran Kattoju, Jacob Belga, J. Laviola
Standard selection techniques such as ray casting fail when virtual objects are partially or fully occluded. In this paper, we present two novel approaches that combine cone-casting, world-in-miniature, and grasping metaphors to disocclude objects in the representation local to the user. Through a within-subject study where we compared 4 selection techniques across 3 levels of object occlusion, we found that our techniques outperformed an alternative one that also focuses on maintaining the spatial relationships between objects. We discuss application scenarios and future research directions for these types of selection techniques.
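As a reference point for the cone-casting primitive mentioned above, here is a minimal scoring sketch that keeps objects within a cone's half-angle and ranks them by angular distance from the ray axis; the half-angle and geometry are illustrative assumptions, not the paper's parameters.

```python
# Minimal cone-cast selection: keep object centers whose direction from the
# ray origin lies within the cone's half-angle, ranked nearest-angle first.
import numpy as np

def cone_cast(origin, direction, centers, half_angle_deg=15.0):
    """Return indices of object centers inside the cone, nearest-angle first."""
    d = direction / np.linalg.norm(direction)
    v = centers - origin
    v = v / np.linalg.norm(v, axis=1, keepdims=True)
    ang = np.degrees(np.arccos(np.clip(v @ d, -1.0, 1.0)))
    hits = np.flatnonzero(ang <= half_angle_deg)
    return hits[np.argsort(ang[hits])]

centers = np.array([[0.0, 0.0, 2.0], [0.5, 0.0, 2.0], [3.0, 0.0, 1.0]])
print(cone_cast(np.zeros(3), np.array([0.0, 0.0, 1.0]), centers))  # -> [0 1]
```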
{"title":"Toward Intuitive Acquisition of Occluded VR Objects Through an Interactive Disocclusion Mini-map","authors":"Mykola Maslych, Yahya Hmaiti, Ryan Ghamandi, Paige Leber, Ravi Kiran Kattoju, Jacob Belga, J. Laviola","doi":"10.1109/VR55154.2023.00061","DOIUrl":"https://doi.org/10.1109/VR55154.2023.00061","url":null,"abstract":"Standard selection techniques such as ray casting fail when virtual objects are partially or fully occluded. In this paper, we present two novel approaches that combine cone-casting, world-in-miniature, and grasping metaphors to disocclude objects in the representation local to the user. Through a within-subject study where we compared 4 selection techniques across 3 levels of object occlusion, we found that our techniques outperformed an alternative one that also focuses on maintaining the spatial relationships between objects. We discuss application scenarios and future research directions for these types of selection techniques.","PeriodicalId":346767,"journal":{"name":"2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR)","volume":"261 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132631619","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
Power, Performance, and Image Quality Tradeoffs in Foveated Rendering
Pub Date : 2023-03-01 DOI: 10.1109/VR55154.2023.00036
Rahul Singh, Muhammad Huzaifa, Jeffrey Liu, Anjul Patney, Hashim Sharif, Yifan Zhao, S. Adve
Extended reality (XR) devices, including augmented, virtual, and mixed reality, provide a deeply immersive experience. However, practical limitations like weight, heat, and comfort put extreme constraints on the performance, power consumption, and image quality of such systems. In this paper, we study how these constraints shape the tradeoffs among Fixed Foveated Rendering (FFR), Gaze-Tracked Foveated Rendering (TFR), and conventional, non-foveated rendering. While existing papers have often studied these methods, we provide the first comprehensive study of their relative feasibility in practical systems with limited battery life and computational budget. We show that TFR, with the added cost of the gaze tracker, can often be more expensive than FFR. Thus, we co-design a gaze-tracked foveated renderer, considering its benefits in computation and power efficiency and its tradeoffs in image quality. We describe principled approximations for eye tracking that provide up to a 9x speedup in runtime performance and approximately a 20x improvement in energy efficiency when run on a mobile GPU. In isolation, these approximations appear to significantly degrade gaze quality, but appropriate compensation in the visual pipeline can mitigate the loss. Overall, we show that with a highly optimized gaze tracker, TFR is feasible compared to FFR, yielding up to 1.25x faster frame times while reducing total energy consumption by over 40%.
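The FFR-versus-TFR energy argument can be made concrete with back-of-envelope arithmetic: TFR shades fewer pixels but pays for the tracker. The sketch below uses hypothetical placeholder numbers, not measurements from the paper.

```python
# Back-of-envelope FFR vs. TFR model: tracked foveation shrinks the shaded
# region but adds the eye-tracker's own energy cost. All numbers are
# hypothetical placeholders.
def frame_energy(shaded_fraction, render_mj_full=100.0, tracker_mj=0.0):
    """Energy per frame (mJ) ~ render cost scaled by shaded pixels + tracker cost."""
    return render_mj_full * shaded_fraction + tracker_mj

full = frame_energy(1.00)
ffr = frame_energy(0.55)                   # fixed foveation: large safety margin
tfr = frame_energy(0.30, tracker_mj=8.0)   # tracked: small fovea + tracker overhead
print(f"full={full:.0f}mJ  FFR={ffr:.0f}mJ  TFR={tfr:.0f}mJ")
# TFR only wins when the tracker's cost stays below the extra shading it avoids.
```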
{"title":"Power, Performance, and Image Quality Tradeoffs in Foveated Rendering","authors":"Rahul Singh, Muhammad Huzaifa, Jeffrey Liu, Anjul Patney, Hashim Sharif, Yifan Zhao, S. Adve","doi":"10.1109/VR55154.2023.00036","DOIUrl":"https://doi.org/10.1109/VR55154.2023.00036","url":null,"abstract":"Extended reality (XR) devices, including augmented, virtual, and mixed reality, provide a deeply immersive experience. However, practical limitations like weight, heat, and comfort put extreme constraints on the performance, power consumption, and image quality of such systems. In this paper, we study how these constraints form the tradeoff between Fixed Foveated Rendering (FFR), Gaze-Tracked Foveated Rendering (TFR), and conventional, non-foveated rendering. While existing papers have often studied these methods, we provide the first comprehensive study of their relative feasibility in practical systems with limited battery life and computational budget. We show that TFR with the added cost of the gaze-tracker can often be more expensive than FFR. Thus, we co-design a gaze-tracked foveated renderer considering its benefits in computation, power efficiency, and tradeoffs in image quality. We describe principled approximations for eye tracking which provide up to a 9x speedup in runtime performance with approximately a 20x improvement in energy efficiency when run on a mobile GPU. In isolation, these approximations appear to significantly degrade the gaze quality, but appropriate compensation in the visual pipeline can mitigate the loss. Overall, we show that with a highly optimized gaze-tracker, TFR is feasible compared to FFR, resulting in up to 1.25x faster frame times while also reducing total energy consumption by over 40%.","PeriodicalId":346767,"journal":{"name":"2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115392122","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Locomotion-aware Foveated Rendering
Pub Date : 2023-03-01 DOI: 10.1109/VR55154.2023.00062
Xuehuai Shi, Lili Wang, Jian Wu, Wei Ke, C. Lam
Optimizing rendering performance improves the user's immersion in virtual scene exploration. Foveated rendering uses features of the human visual system (HVS) to improve rendering performance without sacrificing perceptual visual quality. We collect and analyze the viewing motion of different locomotion methods, and describe the effects of these viewing motions on the HVS's sensitivity, as well as the advantages these effects may bring to foveated rendering. We then propose the locomotion-aware foveated rendering method (LaFR) to further accelerate foveated rendering by leveraging these advantages. We first introduce the LaFR framework. Second, we propose an eccentricity-based shading-rate controller that governs the shading rate of a given region in foveated rendering. Third, we propose a locomotion-aware log-polar mapping method, which controls the foveal average shading rate, the falloff speed of the peripheral shading rate, and the overall shading quantity with locomotion-aware coefficients built on the eccentricity-based shading-rate controller. LaFR achieves perceptual visual quality similar to conventional foveated rendering while achieving up to a 1.6× speedup. Compared with full-resolution rendering, LaFR achieves up to a 3.8× speedup.
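A small sketch of the log-polar mapping named above follows: screen coordinates around the gaze point map into a compact log-polar buffer so shading density falls off with eccentricity. The `falloff` coefficient stands in for the paper's locomotion-aware coefficients and is an assumed placeholder.

```python
# Log-polar foveation sketch: screen pixels map to a smaller log-polar buffer
# centered on the gaze, so peripheral regions receive fewer shaded samples.
# The `falloff` coefficient is an assumed stand-in for LaFR's locomotion-aware
# coefficients.
import numpy as np

def to_log_polar(xy, gaze_xy, buf_size=(64, 128), r_max=1.0, falloff=1.0):
    """Map normalized screen coords to log-polar buffer coords around the gaze."""
    d = xy - gaze_xy
    r = np.hypot(d[..., 0], d[..., 1]) / r_max
    theta = np.arctan2(d[..., 1], d[..., 0])
    u = np.log1p(falloff * r) / np.log1p(falloff) * (buf_size[0] - 1)  # log radius
    v = (theta + np.pi) / (2 * np.pi) * (buf_size[1] - 1)              # angle
    return np.stack([u, v], axis=-1)

px = np.array([[0.6, 0.5], [0.9, 0.9]])  # two screen points
print(to_log_polar(px, gaze_xy=np.array([0.5, 0.5]), falloff=4.0))
# Faster locomotion -> larger falloff -> fewer shaded samples in the periphery.
```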
{"title":"Locomotion-aware Foveated Rendering","authors":"Xuehuai Shi, Lili Wang, Jian Wu, Wei Ke, C. Lam","doi":"10.1109/VR55154.2023.00062","DOIUrl":"https://doi.org/10.1109/VR55154.2023.00062","url":null,"abstract":"Optimizing rendering performance improves the user's immersion in virtual scene exploration. Foveated rendering uses the features of the human visual system (HVS) to improve rendering performance without sacrificing perceptual visual quality. We collect and analyze the viewing motion of different locomotion methods, and describe the effects of these viewing motions on HVS's sensitivity, as well as the advantages of these effects that may bring to foveated rendering. Then we propose the locomotion-aware foveated rendering method (LaFR) to further accelerate foveated rendering by leveraging the advantages. In LaFR, we first introduce the framework of LaFR. Secondly, we propose an eccentricity-based shading rate controller that provides the shading rate control of the given region in foveated rendering. Thirdly, we propose a locomotion-aware log-polar mapping method, which controls the foveal average shading rate, the peripheral shading rate decrease speed, and the overall shading quantity with the locomotion-aware coefficients based on the eccentricity-based shading rate controller. LaFR achieves similar perceptual visual quality as the conventional foveated rendering while achieving up to 1.6× speedup. Compared with the full resolution rendering, LaFR achieves up to 3.8× speedup.","PeriodicalId":346767,"journal":{"name":"2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114201108","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
CaV3: Cache-assisted Viewport Adaptive Volumetric Video Streaming
Pub Date : 2023-03-01 DOI: 10.1109/VR55154.2023.00033
Junhua Liu, Boxiang Zhu, Fang Wang, Yili Jin, Wenyi Zhang, Zihan Xu, Shuguang Cui
Volumetric video (VV) has recently emerged as a new form of video application, providing a photorealistic, immersive 3D viewing experience with six degrees of freedom (6DoF) that empowers applications such as VR, AR, and the Metaverse. A key problem is how to stream such enormous VV content over networks with limited bandwidth. Existing work has mostly focused on predicting the viewport for tiling-based adaptive VV streaming, which has only a limited effect on resource saving. We argue that content repeatability in the viewport can be further leveraged and, for the first time, propose a client-side cache-assisted strategy that buffers VV tiles expected to reappear in the near future so as to reduce redundant VV content transmission. The key challenges lie in three aspects: (1) feature extraction and mining in the 6DoF VV context, (2) accurate long-term viewing-pattern estimation, and (3) optimal caching scheduling under limited capacity. In this paper, we propose CaV3, an integrated cache-assisted viewport-adaptive VV streaming framework that addresses these challenges. CaV3 employs a Long-Short Term Sequential prediction model (LSTSP) that achieves accurate short-term, mid-term, and long-term viewing-pattern prediction with a multi-modal fusion model capturing the viewer's behavioral inertia, current attention, and subjective intention. CaV3 also contains a contextual-MAB-based caching adaptation algorithm (CCA) that fully utilizes the viewing pattern and solves the optimal caching problem with a proven regret upper bound. Unlike existing VV datasets containing only single or co-located objects, we for the first time collect a comprehensive dataset with sufficiently practical unbounded 360° scenes. An extensive evaluation on the dataset confirms the superiority of CaV3, which outperforms SOTA algorithms by 15.6%-43% in viewport prediction and 13%-40% in system utility.
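To illustrate the cache-assisted idea, the sketch below keeps volumetric tiles by a predicted-reuse score and evicts the least likely to reappear; the stub predictor stands in for CaV3's learned contextual-bandit policy (CCA), and the tile names are invented.

```python
# Simplified cache-assisted streaming: tiles are admitted and evicted by a
# predicted-reuse score. The predictor is a stub standing in for CaV3's
# contextual-bandit policy; in the real system it is learned online.
from collections import OrderedDict

class ReuseAwareCache:
    def __init__(self, capacity, predict_reuse):
        self.capacity = capacity
        self.predict_reuse = predict_reuse  # tile_id -> P(tile reappears soon)
        self.store = OrderedDict()

    def admit(self, tile_id, data):
        self.store[tile_id] = data
        while len(self.store) > self.capacity:  # evict least-likely-to-reappear tile
            victim = min(self.store, key=self.predict_reuse)
            del self.store[victim]

    def get(self, tile_id):
        return self.store.get(tile_id)  # a hit avoids a redundant network transfer

reuse_p = {"floor_0": 0.9, "wall_3": 0.2, "avatar_1": 0.8, "prop_7": 0.1}
cache = ReuseAwareCache(3, lambda t: reuse_p.get(t, 0.0))
for t in reuse_p:
    cache.admit(t, data=b"...")
print(list(cache.store))  # the three highest-reuse tiles remain
```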
{"title":"CaV3: Cache-assisted Viewport Adaptive Volumetric Video Streaming","authors":"Junhua Liu, Boxiang Zhu, Fang Wang, Yili Jin, Wenyi Zhang, Zihan Xu, Shuguang Cui","doi":"10.1109/VR55154.2023.00033","DOIUrl":"https://doi.org/10.1109/VR55154.2023.00033","url":null,"abstract":"Volumetric video (VV) recently emerges as a new form of video application providing a photorealistic immersive 3D viewing experience with 6 degree-of-freedom (DoF), which empowers many applications such as VR, AR, and Metaverse. A key problem therein is how to stream the enormous size VV through the network with limited bandwidth. Existing works mostly focused on predicting the viewport for a tiling-based adaptive VV streaming, which however only has quite a limited effect on resource saving. We argue that the content repeatability in the viewport can be further leveraged, and for the first time, propose a client-side cache-assisted strategy that aims to buffer the repeatedly appearing VV tiles in the near future so as to reduce the redundant VV content transmission. The key challenges exist in three aspects, including (1) feature extraction and mining in 6 DoF VV context, (2) accurate long-term viewing pattern estimation and (3) optimal caching scheduling with limited capacity. In this paper, we propose CaV3, an integrated cache-assisted viewport adaptive VV streaming framework to address the challenges. CaV3 employs a Long-short term Sequential prediction model (LSTSP) that achieves accurate short-term, mid-term and long-term viewing pattern prediction with a multi-modal fusion model by capturing the viewer's behavior inertia, current attention, and subjective intention. Besides, CaV3 also contains a contextual MAB-based caching adaptation algorithm (CCA) to fully utilize the viewing pattern and solve the optimal caching problem with a proved upper bound regret. Compared to existing VV datasets only containing single or co-located objects, we for the first time collect a comprehensive dataset with sufficient practical unbounded 360° scenes. The extensive evaluation of the dataset confirms the superiority of CaV3, which outperforms the SOTA algorithm by 15.6%-43% in viewport prediction and 13%-40% in system utility.","PeriodicalId":346767,"journal":{"name":"2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114730817","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6