
Latest Publications: 2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)

Preliminary analysis of effective assistance timing for iterative visual search tasks using gaze-based visual cognition estimation
Syunsuke Yoshida, Makoto Sei, A. Utsumi, H. Yamazoe
In this paper, we focus on whether a person has visually recognized a target (visual cognition, VC) during iterative visual-search tasks and propose an efficient assistance method based on VC. In the proposed method, we first estimate the participant's VC of the target in the previous task. We then determine the target for the next task based on that VC and begin guiding the participant's attention to the next target at the moment VC is detected. By initiating guidance at the timing of the previous target's VC, we can direct attention earlier and achieve more efficient attention guidance. Preliminary experimental results show that VC-based assistance improves task performance.
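To make the assistance timing concrete, here is a minimal sketch of the control flow implied by the abstract, assuming a simple dwell-time criterion for VC; the thresholds and the `estimate_vc` / `start_guidance` helpers are illustrative placeholders, not the authors' actual estimator or guidance technique.

```python
import math

# Hypothetical sketch of VC-timed guidance: a target is treated as "visually
# recognized" (VC) once gaze stays within a small radius of it for a dwell time,
# and guidance toward the next target starts at that moment.

FIXATION_RADIUS_DEG = 2.0   # assumed gaze-to-target angular threshold
DWELL_TIME_S = 0.3          # assumed dwell duration taken as evidence of VC


def estimate_vc(gaze_samples, target_pos):
    """Return the timestamp at which VC of `target_pos` is estimated, or None.

    `gaze_samples` is an iterable of (timestamp_s, gaze_x_deg, gaze_y_deg).
    """
    dwell_start = None
    for t, gx, gy in gaze_samples:
        if math.hypot(gx - target_pos[0], gy - target_pos[1]) <= FIXATION_RADIUS_DEG:
            dwell_start = dwell_start if dwell_start is not None else t
            if t - dwell_start >= DWELL_TIME_S:
                return t        # VC timing: dwell threshold reached
        else:
            dwell_start = None  # gaze left the target; restart the dwell timer
    return None


def run_iterative_search(targets, gaze_stream, start_guidance):
    """As soon as VC of the current target fires, start guiding attention to the next one."""
    for current, nxt in zip(targets, targets[1:]):
        vc_time = estimate_vc(gaze_stream(current), current)
        if vc_time is not None:
            start_guidance(nxt, at_time=vc_time)  # earlier than waiting for task completion
```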
{"title":"Preliminary analysis of effective assistance timing for iterative visual search tasks using gaze-based visual cognition estimation","authors":"Syunsuke Yoshida, Makoto Sei, A. Utsumi, H. Yamazoe","doi":"10.1109/VRW55335.2022.00179","DOIUrl":"https://doi.org/10.1109/VRW55335.2022.00179","url":null,"abstract":"In this paper, focusing on whether a person has visually recognized a target (visual cognition, VC) in iterative visual-search tasks, we propose an efficient assistance method based on the VC. In the proposed method, we first estimate the participant's VC of the target in the previous task. We then determine the target for the next task based on the VC and start to guide the participant's attention to the target for the next task at the VC timing. By initiating the guidance from the timing of the previous target's VC, we can guide attention at an earlier time and achieve efficient attention guidance. The preliminary experimental results showed that VC-based assistance improves task performance.","PeriodicalId":326252,"journal":{"name":"2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134286306","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
VR Training: The Unused Opportunity to Save Lives During a Pandemic
Maximilian Rettinger, G. Rigoll, C. Schmaderer
When patients are on life support, their lives depend not only on the availability of the medical devices but also on the staff's expertise in using them. Taking the example of ECMO devices, which were in high demand during the COVID-19 pandemic but rarely used before it, we developed a VR training program for priming an ECMO that conveys the required expertise in a standardized and simple way on a global scale. This paper presents the development of the VR training together with feedback from medical and technical experts.
{"title":"VR Training: The Unused Opportunity to Save Lives During a Pandemic","authors":"Maximilian Rettinger, G. Rigoll, C. Schmaderer","doi":"10.1109/VRW55335.2022.00092","DOIUrl":"https://doi.org/10.1109/VRW55335.2022.00092","url":null,"abstract":"When on life support, the patients' lives not only depend on the availability of the medical devices but also on the staff's expertise to use them. With the example of ECMO devices, which were highly demanded during the COVID-19 pandemic but rarely used until then, we developed a VR training for priming an ECMO to provide the required expertise in a standardized and simple way on a global scale. This paper presents the development of VR training with feedback from medical and technical experts.","PeriodicalId":326252,"journal":{"name":"2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131651042","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
FUSEDAR: Adaptive Environment Lighting Reconstruction for Visually Coherent Mobile AR Rendering
Yiqin Zhao, Tian Guo
Obtaining accurate omnidirectional environment lighting for high-quality rendering in mobile augmented reality is challenging due to the practical limitations of mobile devices and the inherent spatial variance of lighting. In this paper, we present a novel adaptive environment lighting reconstruction method called FusedAR, which is designed from the outset to account for mobile characteristics, e.g., by exploiting mobile users' natural behavior of pointing the camera sensor perpendicular to the observation-rendering direction. Our initial evaluation shows that FusedAR achieves better rendering effects than a recent deep learning-based AR lighting estimation system [8] and environment lighting captured by 360° cameras.
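As a rough illustration of what incremental environment-lighting reconstruction involves (a generic sketch, not FusedAR's actual pipeline), the snippet below splats each camera frame into a low-resolution equirectangular lighting map using the device orientation and blends it with the running estimate; the field of view, map size, and blending factor are assumed values.

```python
import numpy as np

# Generic incremental environment-map sketch: project each camera frame into an
# equirectangular lighting map via a pinhole model and the device orientation,
# then blend exponentially with the current estimate.

MAP_H, MAP_W = 64, 128        # low-res lighting map (equirectangular)
FOV_DEG = 60.0                # assumed horizontal field of view of the camera
BLEND = 0.2                   # exponential blending weight for new observations


def splat_frame(env_map, frame, R_world_from_cam):
    """Blend `frame` (H x W x 3, float) into `env_map` given camera orientation R."""
    h, w, _ = frame.shape
    f = (w / 2.0) / np.tan(np.radians(FOV_DEG) / 2.0)   # pinhole focal length in pixels
    ys, xs = np.mgrid[0:h, 0:w]
    dirs_cam = np.stack([xs - w / 2.0, ys - h / 2.0, np.full((h, w), f)], axis=-1)
    dirs_cam /= np.linalg.norm(dirs_cam, axis=-1, keepdims=True)
    dirs_world = dirs_cam @ R_world_from_cam.T

    lon = np.arctan2(dirs_world[..., 0], dirs_world[..., 2])        # [-pi, pi]
    lat = np.arcsin(np.clip(dirs_world[..., 1], -1.0, 1.0))         # [-pi/2, pi/2]
    u = ((lon / (2 * np.pi) + 0.5) * (MAP_W - 1)).astype(int)
    v = ((lat / np.pi + 0.5) * (MAP_H - 1)).astype(int)

    env_map[v, u] = (1 - BLEND) * env_map[v, u] + BLEND * frame
    return env_map


env_map = np.zeros((MAP_H, MAP_W, 3), dtype=np.float32)
frame = np.random.rand(120, 160, 3).astype(np.float32)              # stand-in camera frame
env_map = splat_frame(env_map, frame, np.eye(3))
```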
{"title":"FUSEDAR: Adaptive Environment Lighting Reconstruction for Visually Coherent Mobile AR Rendering","authors":"Yiqin Zhao, Tian Guo","doi":"10.1109/VRW55335.2022.00137","DOIUrl":"https://doi.org/10.1109/VRW55335.2022.00137","url":null,"abstract":"Obtaining accurate omnidirectional environment lighting for high quality rendering in mobile augmented reality is challenging due to the practical limitation of mobile devices and the inherent spatial variance of lighting. In this paper, we present a novel adaptive environment lighting reconstruction method called FusedAR, which is designed from the outset to consider mobile characteristics, e.g., by exploiting mobile user natural behaviors of pointing the camera sensors perpendicular to the observation-rendering direction. Our initial evaluation shows that FusedAR achieves better rendering effects compared to using a recent deep learning-based AR lighting estimation system [8] and environment lighting captured by 360° cameras.","PeriodicalId":326252,"journal":{"name":"2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132934444","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
From 2D to 3D: Facilitating Single-Finger Mid-Air Typing on Virtual Keyboards with Probabilistic Touch Modeling
Xin Yi, Chen Liang, Haozhan Chen, Jiuxu Song, Chun Yu, Yuanchun Shi
Mid-air text entry on virtual keyboards suffers from the lack of tactile feedback, which brings challenges to both tap detection and input prediction. In this poster, we demonstrate the feasibility of efficient single-finger typing in mid-air through probabilistic touch modeling. We first collected users' typing data on virtual keyboards of different sizes. Based on an analysis of these data, we derived an input prediction algorithm that incorporates probabilistic touch detection and elastic probabilistic decoding. In an evaluation study where participants performed real text entry tasks with this technique, they reached a pick-up single-finger typing speed of 24.0 WPM with a 2.8% word-level error rate.
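The decoding idea can be illustrated with a toy probabilistic decoder (not the authors' algorithm): each tap is treated as a noisy 2D observation of the intended key, and candidate words are ranked by per-tap Gaussian likelihoods combined with a word prior. The keyboard layout, noise level, and three-word lexicon below are illustrative assumptions.

```python
import math

# Toy Bayesian word decoder: score(word) = sum of per-tap Gaussian log-likelihoods
# around the word's key centers, plus a log unigram prior.

KEY_POS = {c: (i % 10 * 1.0, i // 10 * 1.0) for i, c in enumerate("qwertyuiopasdfghjklzxcvbnm")}
SIGMA = 0.6                                            # assumed tap-noise std. dev. (in key widths)
LEXICON = {"hello": 0.4, "help": 0.3, "hollow": 0.3}   # toy word prior


def log_gauss(tap, key):
    dx, dy = tap[0] - key[0], tap[1] - key[1]
    return -(dx * dx + dy * dy) / (2 * SIGMA * SIGMA)


def decode(taps):
    """Return lexicon words ranked by posterior score for a sequence of (x, y) taps."""
    scores = {}
    for word, prior in LEXICON.items():
        if len(word) != len(taps):
            continue                                   # a full decoder would handle insertions/deletions
        ll = sum(log_gauss(t, KEY_POS[c]) for t, c in zip(taps, word))
        scores[word] = ll + math.log(prior)
    return sorted(scores, key=scores.get, reverse=True)


taps = [KEY_POS[c] for c in "help"]                    # perfectly centered taps for "help"
print(decode(taps))                                    # -> ['help']
```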
{"title":"From 2D to 3D: Facilitating Single-Finger Mid-Air Typing on Virtual Keyboards with Probabilistic Touch Modeling","authors":"Xin Yi, Chen Liang, Haozhan Chen, Jiuxu Song, Chun Yu, Yuanchun Shi","doi":"10.1109/VRW55335.2022.00198","DOIUrl":"https://doi.org/10.1109/VRW55335.2022.00198","url":null,"abstract":"Mid-air text entry on virtual keyboards suffers from the lack of tactile feedback, bringing challenges to both tap detection and input prediction. In this poster, we demonstrated the feasibility of efficient single-finger typing in mid-air through probabilistic touch modeling. We first collected users' typing data on different sizes of virtual keyboards. Based on analyzing the data, we derived an input prediction algorithm that incorporated probabilistic touch detection and elastic probabilistic decoding. In the evaluation study where the participants performed real text entry tasks with this technique, they reached a pick-up single-finger typing speed of 24.0 WPM with 2.8% word-level error rate.","PeriodicalId":326252,"journal":{"name":"2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131034368","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
[DC] Designing and Optimizing Daily-wear Photophobic Smart Sunglasses
Xiaodan Hu
Photophobia, also known as light sensitivity, is a condition characterized by an abnormal intolerance of light. Traditional sunglasses and tinted glasses typically worn by individuals with photophobia provide only linear dimming, which makes it difficult to see content in the dark regions of a high-contrast environment (e.g., indoors at night). This paper presents smart dimming sunglasses that use a spatial light modulator (SLM) to flexibly dim the user's field of view based on scene detection from a high dynamic range (HDR) camera. To address the problem that the occlusion mask displayed on the SLM becomes blurred because it is out of focus, and thus cannot provide sufficient modulation when viewing a distant object, I design an optimization model to dilate the occlusion mask appropriately. The optimized dimming effect is verified with a camera and in a preliminary test with real users to filter the desired amount of incoming light through a blurred mask.
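The masking idea can be sketched as follows (this is not the paper's optimization model): bright regions in the HDR frame are thresholded into an occlusion mask, which is then dilated by a radius tied to the expected defocus blur so that the blurred mask on the SLM still covers the light source. The threshold and dilation radius are assumed values.

```python
import numpy as np

# Threshold the HDR luminance into a binary occlusion mask, then dilate it so the
# defocus-blurred mask still fully covers the bright source.

LUMA_THRESHOLD = 4.0      # assumed luminance above which the scene should be dimmed
DILATE_RADIUS = 3         # assumed dilation radius in SLM pixels; grows with defocus blur


def occlusion_mask(hdr_luma):
    """Binary mask of regions to dim, dilated to compensate for SLM defocus."""
    mask = (hdr_luma > LUMA_THRESHOLD).astype(np.uint8)
    h, w = mask.shape
    # naive square dilation: a pixel is set if any neighbour within the radius is set
    padded = np.pad(mask, DILATE_RADIUS)
    out = np.zeros_like(mask)
    for dy in range(-DILATE_RADIUS, DILATE_RADIUS + 1):
        for dx in range(-DILATE_RADIUS, DILATE_RADIUS + 1):
            out |= padded[DILATE_RADIUS + dy: DILATE_RADIUS + dy + h,
                          DILATE_RADIUS + dx: DILATE_RADIUS + dx + w]
    return out


luma = np.zeros((32, 32))
luma[10:12, 20:22] = 10.0                 # a small bright light source
print(occlusion_mask(luma).sum())         # dilated mask covers a larger area than the source
```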
{"title":"[DC] Designing and Optimizing Daily-wear Photophobic Smart Sunglasses","authors":"Xiaodan Hu","doi":"10.1109/VRW55335.2022.00318","DOIUrl":"https://doi.org/10.1109/VRW55335.2022.00318","url":null,"abstract":"Photophobia, also known as light sensitivity, is a condition in which there is a fear of light. Traditional sunglasses and tinted glasses typically worn by individuals with photophobia only provide linear dimming, leading to difficulty to see the contents in the dark region of a high-contrast environment (e.g., indoors at night). This paper presents a smart dimming sunglass that uses a spatial light modular (SLM) to flexibly dim the user's field of view based on scene detection from a high dynamic range (HDR) camera. To address the problem that the occlusion mask displayed on the SLM becomes blurred due to out-of-focus and thus cannot provide a sufficient modulation when viewing a distant object, I design an optimization model to dilate the occlusion mask appropriately. The optimized dimming effect is verified by the camera and preliminary test by real users to be able to filter the desired amount of incoming light through a blurred mask.","PeriodicalId":326252,"journal":{"name":"2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131201131","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Movement Augmentation in Virtual Reality: Impact on Sense of Agency Measured by Subjective Responses and Electroencephalography
Liu Wang, Mengjie Huang, Chengxuan Qin, Yiqi Wang, Rui Yang
Virtual movement augmentation, i.e., the visual amplification of remapped movement, shows potential for application in motion-related virtual reality programs. The sense of agency (SoA), which measures the user's feeling of control over their actions, has not been fully investigated for augmented movement. This study investigated the effect of augmented movement at three levels (baseline, medium, and high) on users' SoA using both subjective responses and electroencephalography (EEG). Results show that SoA is boosted slightly at the medium augmentation level but drops at the high level; augmented virtual movement therefore enhances SoA only to a certain extent.
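Movement amplification of this kind is commonly implemented as a gain applied to the tracked displacement; the sketch below shows that remapping, with the three augmentation levels modelled only as assumed gain values rather than the study's actual parameters.

```python
# Gain-based movement remapping: the virtual hand's displacement from the movement
# origin is the real displacement scaled by an amplification factor.

GAINS = {"baseline": 1.0, "medium": 1.5, "high": 2.0}   # assumed amplification factors


def augmented_position(real_pos, origin, level="medium"):
    """Virtual hand position: real displacement from `origin` scaled by the gain."""
    g = GAINS[level]
    return tuple(o + g * (p - o) for p, o in zip(real_pos, origin))


# real hand moved 0.10 m to the right of the movement origin
print(augmented_position((0.10, 0.0, 0.0), (0.0, 0.0, 0.0), "high"))   # (0.2, 0.0, 0.0)
```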
{"title":"Movement Augmentation in Virtual Reality: Impact on Sense of Agency Measured by Subjective Responses and Electroencephalography","authors":"Liu Wang, Mengjie Huang, Chengxuan Qin, Yiqi Wang, Rui Yang","doi":"10.1109/VRW55335.2022.00267","DOIUrl":"https://doi.org/10.1109/VRW55335.2022.00267","url":null,"abstract":"Virtual movement augmentation, which refers to the visual amplification of remapped movement, shows potential to be applied in motion-related virtual reality programs. Sense of agency (SoA), which measures the user's feeling of control in their action, has not been fully investigated for augmented movement. This study investigated the effect of augmented movement at three different levels (baseline, medium, and high) on users' SoA using both subjective responses and electroencephalography (EEG). Results show that SoA can be boosted slightly at medium augmentation level but drops at high level. The augmented virtual movement only helps to enhance SoA to a certain extent.","PeriodicalId":326252,"journal":{"name":"2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131259851","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 10
Designing VR training systems for children with attention deficit hyperactivity disorder (ADHD)
Ho-Yan Kwan, Lang Lin, Conor Fahy, J. Shell, Shiqi Pang, Yongkang Xing
Attention-deficit hyperactivity disorder (ADHD) is a common mental disorder in childhood, with a reported global prevalence of 5%. This project uses Virtual Reality (VR) technology to help children improve their concentration, with the aim of mitigating some of the deficiencies of existing rehabilitation methods. The research applies the interactive features of VR and combines them with psychological rehabilitation training. It also uses electroencephalography (EEG) for real-time feedback: a mobile application receives and visualizes the EEG data to assist medical staff and patients' families in evaluating the treatment. The resulting therapy training system has no physical space restrictions, is easy to deploy, and supports a highly customizable rehabilitation process.
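For illustration of the real-time EEG feedback component, the snippet below computes one commonly used engagement index, beta / (alpha + theta), from band powers of a short signal window; the abstract does not specify the system's actual metric, so this is a generic stand-in with an assumed sampling rate and band edges.

```python
import numpy as np

# Generic EEG engagement index from band powers of a short window (illustrative only).

FS = 256                      # assumed EEG sampling rate in Hz
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}


def band_power(window, lo, hi):
    """Mean spectral power of `window` (1-D samples) between lo and hi Hz."""
    freqs = np.fft.rfftfreq(len(window), d=1.0 / FS)
    psd = np.abs(np.fft.rfft(window)) ** 2
    sel = (freqs >= lo) & (freqs < hi)
    return psd[sel].mean()


def engagement_index(window):
    p = {name: band_power(window, lo, hi) for name, (lo, hi) in BANDS.items()}
    return p["beta"] / (p["alpha"] + p["theta"])


# two seconds of synthetic EEG dominated by 10 Hz alpha, for demonstration
t = np.arange(2 * FS) / FS
eeg = np.sin(2 * np.pi * 10 * t) + 0.2 * np.random.randn(len(t))
print(round(engagement_index(eeg), 3))   # low value -> low estimated engagement
```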
{"title":"Designing VR training systems for children with attention deficit hyperactivity disorder (ADHD)","authors":"Ho-Yan Kwan, Lang Lin, Conor Fahy, J. Shell, Shiqi Pang, Yongkang Xing","doi":"10.1109/VRW55335.2022.00030","DOIUrl":"https://doi.org/10.1109/VRW55335.2022.00030","url":null,"abstract":"Attention-deficit hyperactivity disorder (ADHD) is a common mental disorder in childhood, with a reported 5% global prevalence rate. The project uses Virtual Reality (VR) technology to help children improve their concentration in order to mitigate some of the various deficiencies in existing rehabilitation methods. The research aims to apply the interactive features of VR technologies and to combine them with psychological rehabilitation training technology. The research also uses Electroencephalography (EEG) brain electricity image technology for real-time information feedback. The mobile application can receive the EEG data with visualization to assist medical staff and patients' families in evaluating the treatment. The research designs a therapy training system without physical space restriction. It is easy to deploy and can be a highly customizable rehabilitation process.","PeriodicalId":326252,"journal":{"name":"2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133603783","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
VR-based Context Priming to Increase Student Engagement and Academic Performance
Daniel Hawes, A. Arya
Research suggests that virtual environments can be designed to increase engagement and performance on many cognitive tasks. This paper compares the efficacy of 3D environments specifically designed to prime these effects within Virtual Reality (VR). A 27-minute seminar, "The Creative Process of Making an Animated Movie," was presented to 51 participants in three VR learning spaces: two prime conditions and one no-prime condition. The prime conditions were two situated learning environments, an animation studio and a theatre containing animation artifacts; the no-prime condition was the same theatre without artifacts. Increased academic performance was observed in both prime conditions. A UX survey was also completed.
{"title":"VR-based Context Priming to Increase Student Engagement and Academic Performance","authors":"Daniel Hawes, A. Arya","doi":"10.1109/VRW55335.2022.00196","DOIUrl":"https://doi.org/10.1109/VRW55335.2022.00196","url":null,"abstract":"Research suggests that virtual environments can be designed to increase engagement and performance with many cognitive tasks. This paper compares the efficacy of specifically designed 3D environments intended to prime these effects within Virtual Reality (VR). A 27-minute seminar “The Creative Process of Making an Animated Movie” was presented to 51 participants within three VR learning spaces: two prime and one no-prime. The prime conditions included two situated learning environments; an animation studio and a theatre with animation artifacts vs. the no-prime: theatre without artifacts. Increased academic performance was observed in both prime conditions. A UX survey was also completed.","PeriodicalId":326252,"journal":{"name":"2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131411488","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
View-Adaptive Asymmetric Image Detail Enhancement for 360-degree Stereoscopic VR Content
Kin-Ming Wong
We present a simple VR-specific image detail enhancement method that improves the viewing experience of 360-degree stereoscopic photographed VR content. By exploiting the fusion characteristics of binocular vision, we propose an asymmetric process that applies detail enhancement to only a single image channel. Our method can apply the enhancement dynamically, in a view-adaptive fashion and in real time, on most low-cost standalone VR headsets. We discuss the benefits of this method with respect to authoring possibilities and the storage and bandwidth issues of photographed VR content.
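The asymmetric idea can be sketched as follows: apply a detail-enhancement filter (here a simple unsharp mask, which may differ from the paper's filter) to one eye's image only and leave the other eye untouched, relying on binocular fusion. The kernel radius and strength are assumed values.

```python
import numpy as np

# Unsharp masking applied to a single eye of a stereo pair; the other eye is untouched.

RADIUS = 2          # assumed blur radius for the unsharp mask
STRENGTH = 0.8      # assumed enhancement strength


def box_blur(img, r):
    """Naive box blur with edge padding (grayscale image as a 2-D array)."""
    k = 2 * r + 1
    padded = np.pad(img, r, mode="edge")
    out = np.zeros_like(img, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)


def enhance_one_eye(left, right, enhance_left=True):
    """Return (left, right) with an unsharp mask applied to a single eye only."""
    target = left if enhance_left else right
    sharpened = np.clip(target + STRENGTH * (target - box_blur(target, RADIUS)), 0.0, 1.0)
    return (sharpened, right) if enhance_left else (left, sharpened)


left = np.random.rand(64, 64)
right = np.random.rand(64, 64)
left_e, right_e = enhance_one_eye(left, right)
print(np.allclose(right_e, right), not np.allclose(left_e, left))   # True True
```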
{"title":"View-Adaptive Asymmetric Image Detail Enhancement for 360-degree Stereoscopic VR Content","authors":"Kin-Ming Wong","doi":"10.1109/VRW55335.2022.00012","DOIUrl":"https://doi.org/10.1109/VRW55335.2022.00012","url":null,"abstract":"We present a simple VR-specific image detail enhancement method that improves the viewing experience of 360-degree stereoscopic photographed VR contents. By exploiting the fusion characteristics of binocular vision, we propose an asymmetric process that applies detail enhancement to one single image channel only. Our method can dynamically apply the enhancement in a view-adaptive fashion in real-time on most low-cost standalone VR headsets. We discuss the benefits of this method with respect to authoring possibilities, storage and bandwidth issues of photographed VR contents.","PeriodicalId":326252,"journal":{"name":"2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127050941","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
ARTFM: Augmented Reality Visualization of Tool Functionality Manuals in Operating Rooms
Constantin Kleinbeck, Hannah Schieber, S. Andress, C. Krautz, Daniel Roth
Error-free surgical procedures are crucial for a patient's health. However, with the increasing complexity and variety of surgical instruments, it is difficult for clinical staff to acquire detailed assembly and usage knowledge, which leads to errors in process and preparation steps. Yet the gold standard for retrieving the necessary information when problems occur is still the paper-based manual, and reading through the required instructions is time-consuming and decreases care quality. We propose ARTFM, a process-integrated manual that highlights the correct parts needed, their location, and step-by-step instructions for assembling the instrument using an augmented reality head-mounted display.
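A process-integrated manual of this kind needs a per-step data model linking instructions to the parts that should be highlighted; the sketch below shows one hypothetical schema for such steps (the paper does not publish its data format, and all identifiers here are invented for illustration).

```python
from dataclasses import dataclass, field
from typing import List, Tuple

# Hypothetical data model: each step names the required parts, where to place their
# highlights, and the instruction text to render in the head-mounted display.


@dataclass
class Part:
    part_id: str
    tray_position: Tuple[float, float, float]   # highlight anchor in tray coordinates


@dataclass
class AssemblyStep:
    instruction: str
    parts: List[Part] = field(default_factory=list)


MANUAL = [
    AssemblyStep("Attach the outer sleeve to the handle.",
                 [Part("sleeve-01", (0.10, 0.02, 0.00)), Part("handle-03", (0.25, 0.02, 0.00))]),
    AssemblyStep("Lock the ratchet mechanism until it clicks.",
                 [Part("ratchet-07", (0.40, 0.02, 0.00))]),
]

for i, step in enumerate(MANUAL, 1):
    highlights = ", ".join(p.part_id for p in step.parts)
    print(f"Step {i}: {step.instruction} (highlight: {highlights})")
```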
{"title":"ARTFM: Augmented Reality Visualization of Tool Functionality Manuals in Operating Rooms","authors":"Constantin Kleinbeck, Hannah Schieber, S. Andress, C. Krautz, Daniel Roth","doi":"10.1109/VRW55335.2022.00219","DOIUrl":"https://doi.org/10.1109/VRW55335.2022.00219","url":null,"abstract":"Error-free surgical procedures are crucial for a patient's health. However, with the increasing complexity and variety of surgical instruments, it is difficult for clinical staff to acquire detailed assembly and usage knowledge leading to errors in process and preparation steps. Yet, the gold standard in retrieving necessary information when problems occur is to get the paperbased manual. Reading through the necessary instructions is time-consuming and decreases care quality. We propose ARTFM, a process integrated manual, highlighting the correct parts needed, their location, and step-by-step instructions to combine the instrument using an augmented reality head-mounted display.","PeriodicalId":326252,"journal":{"name":"2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133752995","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0