Latest publications: Proceedings of the 25th ACM Symposium on Virtual Reality Software and Technology
Learning-based Estimation of 6-DoF Camera Poses from Partial Observation of Large Objects for Mobile AR*
Jean-Pierre Lomaliza, Hanhoon Park
We propose a method that estimates the 6-DoF camera pose from a partially visible large object by exploiting information about its subparts, which are detected using a state-of-the-art convolutional neural network (CNN). The trained CNN outputs two-dimensional bounding boxes around the subparts together with their associated classes. The detection output is then fed to a deep neural network that regresses the camera's 6-DoF pose. Experimental results show that the proposed method is more robust to occlusions than conventional learning-based methods.
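The detection-to-regression pipeline the abstract describes can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the subpart class count, the feature encoding, and the network size are all assumptions, and the weights here are untrained.

```python
import numpy as np

NUM_SUBPART_CLASSES = 8  # assumed number of subpart classes

def detections_to_feature(detections, num_classes=NUM_SUBPART_CLASSES):
    """Encode CNN detections into a fixed-length vector.

    detections: list of (class_id, (x, y, w, h)) bounding boxes in
    normalized image coordinates. Subparts that are occluded or out of
    frame are simply absent and encoded as zeros, which is what lets
    the regressor cope with partial visibility.
    """
    feat = np.zeros(num_classes * 4)
    for cls, (x, y, w, h) in detections:
        feat[cls * 4: cls * 4 + 4] = [x, y, w, h]
    return feat

class PoseRegressor:
    """Tiny MLP regressing a 6-DoF pose (tx, ty, tz, rx, ry, rz).

    Weights are random here; in practice they would be trained on
    detection/pose pairs.
    """

    def __init__(self, in_dim, hidden=64, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(0.0, 0.1, (in_dim, hidden))
        self.b1 = np.zeros(hidden)
        self.w2 = rng.normal(0.0, 0.1, (hidden, 6))
        self.b2 = np.zeros(6)

    def __call__(self, feat):
        h = np.maximum(feat @ self.w1 + self.b1, 0.0)  # ReLU
        return h @ self.w2 + self.b2                   # 6-DoF pose

# Only 2 of 8 subparts visible: the feature vector keeps a fixed shape.
detections = [(0, (0.10, 0.20, 0.30, 0.40)), (3, (0.50, 0.50, 0.20, 0.20))]
feat = detections_to_feature(detections)
pose = PoseRegressor(in_dim=feat.size)(feat)
print(pose.shape)  # (6,)
```

The zero-padding of missing subparts is what makes the input well-defined under occlusion, which is the robustness property the abstract claims.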
Citations: 1
Interactive Virtual-Reality Fire Extinguisher with Haptic Feedback
Sang-Woo Seo, Seungjoon Kwon, Waseem Hassan, A. Talhan, Seokhee Jeon
We present an interactive virtual-reality (VR) fire extinguisher that provides both realistic viewing through a head-mounted display (HMD) and kinesthetic experiences through a pneumatic muscle and a vibrotactile transducer. The VR fire extinguisher is designed to train people to use a fire extinguisher skillfully in real fire situations. We seamlessly integrate three technologies: VR, object motion tracking, and haptic feedback. A fire scene is rendered immersively in the HMD, and a motion tracker is used to replicate a real, purpose-built object in the virtual environment, realizing an augmented-reality effect. In addition, when the handle of the fire extinguisher is squeezed to release the extinguishing agent, the haptic device generates both vibrotactile and air-flow tactile feedback, providing the same experience as using a real fire extinguisher.
Citations: 7
A Comparison of Human and Machine-Generated Voice
Amal Abdulrahman, Deborah Richards, A. Bilgin
This study investigates the influence of a virtual human (VH) with a recorded human voice versus a VH with a machine-generated (text-to-speech) voice on building trust and working alliance. We measured co-presence perception to understand the impact of the perception of the VH on building the human-VH relationship. The results revealed no differences between the two types of voice in co-presence perception, trust, or working alliance.
Citations: 5
A Content-Aware Approach for Analysing Eye Movement Patterns in Virtual Reality
Xiang-Zhi Cao, Richard Skarbez, Zhen He, H. Duh
Observing eye movement is a direct way to analyse human attention. Eye movement patterns in normal environments have been widely investigated. In virtual reality (VR) environments, previous studies of eye movement patterns have been based mainly on content-unrelated influential factors. To address this gap, this paper studies a novel content-related factor. One crucial kind of region of interest (ROI), namely the vision-penetrable entrance, is chosen for analysing differences in eye movement patterns. The results suggest that users show more interest in vision-penetrable entrances than in other regions; this difference manifests as a higher average fixation density. As far as we know, this paper is the first attempt to study specific types of ROI in virtual reality environments. The method utilised in this paper can be applied to other ROI analyses.
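The fixation-density comparison the abstract relies on can be illustrated with a small sketch. The gaze data, ROI coordinates, and the rectangular ROI shape are hypothetical; the paper's actual gaze pipeline is not shown.

```python
import numpy as np

def fixation_density(fixations, roi):
    """Fixations per unit area inside a rectangular ROI.

    fixations: (N, 2) array of gaze fixation points (x, y).
    roi: (x0, y0, x1, y1) rectangle, e.g. a vision-penetrable entrance.
    """
    x0, y0, x1, y1 = roi
    inside = ((fixations[:, 0] >= x0) & (fixations[:, 0] < x1) &
              (fixations[:, 1] >= y0) & (fixations[:, 1] < y1)).sum()
    return inside / ((x1 - x0) * (y1 - y0))

# Synthetic example: 6 fixations, 4 clustered in a 2x2 entrance region.
fix = np.array([(1.0, 1.0), (1.5, 1.2), (1.2, 1.8), (1.9, 1.9),
                (5.0, 5.0), (7.0, 2.0)])
entrance = (1.0, 1.0, 3.0, 3.0)   # hypothetical entrance ROI
print(fixation_density(fix, entrance))  # 4 fixations / 4 area units = 1.0
```

Normalising by ROI area is what makes densities comparable across regions of different sizes, which is the comparison the study makes between entrances and other regions.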
Citations: 0
Investigating a Physical Dial as a Measurement Tool for Cybersickness in Virtual Reality
Natalie McHugh, Sungchul Jung, S. Hoermann, R. Lindeman
This study explores ways to increase comfort in Virtual Reality by minimizing cybersickness. Cybersickness is related to classical motion sickness and causes unwanted symptoms when using immersive technologies. We developed a dial interface to accurately capture momentary user cybersickness and feed this information back to the user. Using a seated VR roller coaster environment, we found that the dial is significantly positively correlated with post-immersion questionnaires and is a valid tool compared to verbal rating approaches.
Citations: 13
Signifier-Based Immersive and Interactive 3D Modeling
J. A. Bærentzen, J. Frisvad, K. Singh
Interactive 3D modeling in VR is both aided by immersive 3D input and hampered by model-disjunct, tool-based, or selection-action user interfaces. We propose a direct, signifier-based approach to the popular interactive technique of creating 3D models through a sequence of extrusion operations. Motivated by the handles and signifiers that communicate the affordances of everyday objects, we define a set of design principles for an immersive, signifier-based modeling interface. We then present an interactive 3D modeling system in which all modeling affordances are modelessly reachable and signified on the model itself.
Citations: 3
Optical-Reflection Type 3D Augmented Reality Mirrors
Gun A. Lee, H. Park, M. Billinghurst
Augmented Reality (AR) mirrors can show virtual objects overlaid onto the physical world reflected in the mirror. Optical-reflection type AR mirror displays use half-silvered mirrors attached in front of a digital display. However, prior work suffered from visual depth mismatch between the optical reflection of the 3D physical space and 2D images displayed on the surface of the mirror. In this research, we use 3D visualisation to overcome this problem and improve the user experience by providing better depth perception for watching and interacting with the content displayed on an AR mirror. As a proof of concept, we developed two prototype optical-reflection type 3D AR mirror displays, one using glasses-free multi-view 3D display and another using a head tracked 3D stereoscopic display that supports hand gesture interaction.
Citations: 10
Out-of-body Locomotion: Vectionless Navigation with a Continuous Avatar Representation
Nathan Navarro Griffin, Eelke Folmer
Teleportation is a popular and low-risk means of navigating in VR. Because teleportation translates the user's viewpoint discontinuously, no optical flow is generated that could lead to vection-induced VR sickness. However, instant viewpoint translations, and the resulting discontinuous avatar representation, are not only detrimental to presence and spatial awareness but also present a challenge for gameplay design, particularly for multiplayer games. We compare out-of-body locomotion, a hybrid viewpoint technique that lets users seamlessly switch between a first-person and a third-person avatar view, to traditional pointer-based teleportation. While in third-person, if the user does not move, the camera remains stationary to avoid generating any optical flow. The third-person view also lets users precisely and continuously navigate their avatar without risk of VR sickness. The viewpoint automatically switches back to first-person as soon as the user breaks line of sight with their avatar or requests to rejoin the avatar with a button press. A user study compares out-of-body locomotion to teleportation with participants (n=22) traversing an obstacle course. Results show that out-of-body locomotion requires significantly fewer (67%) viewpoint transitions than teleportation, with no significant difference in performance. In addition to being able to offer a continuous avatar representation, participants also deemed out-of-body locomotion to be faster.
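The hybrid viewpoint behaviour described above can be sketched as a small state machine. This is an illustrative reconstruction from the abstract, not the authors' code; the class, method names, and update contract are invented.

```python
FIRST_PERSON, THIRD_PERSON = "first_person", "third_person"

class OutOfBodyViewpoint:
    """Hybrid first-/third-person viewpoint (hypothetical sketch).

    In third-person the camera stays fixed while the avatar travels,
    so no optical flow (and hence no vection) is generated. First-person
    resumes when line of sight to the avatar is broken or the user
    presses the rejoin button.
    """

    def __init__(self):
        self.mode = FIRST_PERSON

    def go_out_of_body(self):
        """User steps out of the avatar into the stationary third-person view."""
        self.mode = THIRD_PERSON

    def update(self, has_line_of_sight, rejoin_pressed):
        """Called once per frame; returns the active viewpoint mode."""
        if self.mode == THIRD_PERSON and (rejoin_pressed or not has_line_of_sight):
            self.mode = FIRST_PERSON
        return self.mode

vp = OutOfBodyViewpoint()
vp.go_out_of_body()
print(vp.update(has_line_of_sight=True, rejoin_pressed=False))   # third_person
print(vp.update(has_line_of_sight=False, rejoin_pressed=False))  # first_person
```

Keeping the third-person camera stationary by default is the key design choice: the avatar may move continuously, but the rendered viewpoint only ever jumps, never glides.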
Citations: 27
Interactive Indirect Illumination Using Mipmap-based Ray Marching and Local Means Replaced Denoising
Bo Zhang, Kyoungsu Oh
An interactive, one-bounce indirect illumination algorithm that considers indirect visibility is introduced. First, a mipmap-based ray marching (MRM) algorithm, built on a 3D mipmap hierarchy generated by voxelizing the scene, is used to accelerate ray-voxel intersection tests. Second, the indirect images are denoised by iterating an improved edge-avoiding filter with a local means replacement (LMR) method. The implementation demonstrates that our solutions can efficiently generate high-quality global illumination images even in a fully dynamic scene.
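The empty-space skipping idea behind mipmap-based ray marching can be illustrated with a simplified sketch: a boolean occupancy grid stands in for the paper's voxelized scene, and the marcher jumps to the exit of the coarsest empty mip cell containing each sample point. The stepping logic and epsilon values are assumptions, not the paper's algorithm.

```python
import numpy as np

def build_mip_hierarchy(occupancy):
    """OR-reduce a cubic boolean occupancy grid (side = power of two).

    A coarse voxel is marked occupied if ANY of its 8 children is
    occupied, so an empty coarse cell guarantees empty space below it.
    """
    levels = [occupancy]
    while levels[-1].shape[0] > 1:
        g = levels[-1]
        n = g.shape[0] // 2
        levels.append(g.reshape(n, 2, n, 2, n, 2).any(axis=(1, 3, 5)))
    return levels

def march(levels, origin, direction, max_t=1e3, eps=1e-4):
    """Return the ray parameter t of the first occupied fine voxel, or None."""
    origin = np.asarray(origin, dtype=float)
    direction = np.asarray(direction, dtype=float)
    direction = direction / np.linalg.norm(direction)
    n = levels[0].shape[0]
    t = 0.0
    while t < max_t:
        p = origin + t * direction
        if np.any(p < 0) or np.any(p >= n):
            return None                      # left the volume
        idx = p.astype(int)
        if levels[0][tuple(idx)]:
            return t                         # hit at full resolution
        # coarsest level whose cell containing p is still empty
        lv = 0
        while lv + 1 < len(levels) and not levels[lv + 1][tuple(idx >> (lv + 1))]:
            lv += 1
        lo = (idx >> lv << lv).astype(float)  # empty cell lower corner
        size = float(1 << lv)
        with np.errstate(divide="ignore", invalid="ignore"):
            exit_t = np.where(direction > 0, (lo + size - p) / direction,
                     np.where(direction < 0, (lo - p) / direction, np.inf))
        t += max(exit_t.min(), eps) + eps     # step just past the empty cell
    return None

# 8^3 volume with an occupied slab at x >= 6; a ray cast from x = 0.5
# along +x should hit near t = 5.5 after only a few coarse steps.
grid = np.zeros((8, 8, 8), dtype=bool)
grid[6:, :, :] = True
levels = build_mip_hierarchy(grid)
t = march(levels, origin=(0.5, 4.0, 4.0), direction=(1.0, 0.0, 0.0))
print(round(t, 2))  # 5.5
```

The payoff is that long empty stretches cost one step per coarse cell rather than one per fine voxel, which is what makes per-pixel indirect rays tractable at interactive rates.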
Citations: 1
Real Time Point Cloud Self-Avatar With A Single RGB-D Camera
Hela Ridha-Mahfoudhi, Nguyen Thong Dang
This paper presents a method for generating a self-avatar in real time using a single RGB-D camera. The self-avatar is presented in the form of a point cloud captured with a Kinect V2. The method smooths, filters, segments, and remaps the point data representing the user's body in real time. The point-cloud avatar can be generated in both third- and first-person views.
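Two of the listed steps, filtering and segmenting, can be sketched on a synthetic point cloud. The voxel size, depth band, and function names are assumptions for illustration; the paper's actual pipeline (including smoothing and remapping) is not reproduced here.

```python
import numpy as np

def voxel_downsample(points, voxel=0.02):
    """Replace the points in each voxel by their centroid.

    This both smooths sensor noise and decimates the cloud, which is
    one common way to keep per-frame processing real-time.
    """
    keys = np.floor(points / voxel).astype(np.int64)
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    inverse = inverse.reshape(-1)
    counts = np.bincount(inverse).astype(float)
    out = np.empty((counts.size, 3))
    for d in range(3):
        out[:, d] = np.bincount(inverse, weights=points[:, d]) / counts
    return out

def segment_depth_band(points, z_near=0.5, z_far=4.5):
    """Crude body segmentation: keep points in the sensor's useful depth band."""
    z = points[:, 2]
    return points[(z >= z_near) & (z <= z_far)]

# Noisy cluster around one body point plus a far outlier beyond the band.
rng = np.random.default_rng(1)
cloud = np.vstack([rng.normal([0.0, 1.0, 2.0], 0.005, (100, 3)),
                   [[0.0, 1.0, 9.0]]])
body = segment_depth_band(cloud)   # outlier removed
down = voxel_downsample(body)      # far fewer points, same centroid
print(len(cloud), len(body), len(down) < len(body))  # 101 100 True
```

A real implementation would run this per frame on the Kinect depth stream and then remap the surviving points into the avatar's coordinate frame.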
Citations: 1