Xihe: a 3D vision-based lighting estimation framework for mobile augmented reality

Yiqin Zhao, Tian Guo
{"title":"Xihe: a 3D vision-based lighting estimation framework for mobile augmented reality","authors":"Yiqin Zhao, Tian Guo","doi":"10.1145/3458864.3467886","DOIUrl":null,"url":null,"abstract":"Omnidirectional lighting provides the foundation for achieving spatially-variant photorealistic 3D rendering, a desirable property for mobile augmented reality applications. However, in practice, estimating omnidirectional lighting can be challenging due to limitations such as partial panoramas of the rendering positions, and the inherent environment lighting and mobile user dynamics. A new opportunity arises recently with the advancements in mobile 3D vision, including built-in high-accuracy depth sensors and deep learning-powered algorithms, which provide the means to better sense and understand the physical surroundings. Centering the key idea of 3D vision, in this work, we design an edge-assisted framework called Xihe to provide mobile AR applications the ability to obtain accurate omnidirectional lighting estimation in real time. Specifically, we develop a novel sampling technique that efficiently compresses the raw point cloud input generated at the mobile device. This technique is derived based on our empirical analysis of a recent 3D indoor dataset and plays a key role in our 3D vision-based lighting estimator pipeline design. To achieve the realtime goal, we develop a tailored GPU pipeline for on-device point cloud processing and use an encoding technique that reduces network transmitted bytes. Finally, we present an adaptive triggering strategy that allows Xihe to skip unnecessary lighting estimations and a practical way to provide temporal coherent rendering integration with the mobile AR ecosystem. We evaluate both the lighting estimation accuracy and time of Xihe using a reference mobile application developed with Xihe's APIs. Our results show that Xihe takes as fast as 20.67ms per lighting estimation and achieves 9.4% better estimation accuracy than a state-of-the-art neural network.","PeriodicalId":153361,"journal":{"name":"Proceedings of the 19th Annual International Conference on Mobile Systems, Applications, and Services","volume":"46 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"12","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 19th Annual International Conference on Mobile Systems, Applications, and Services","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3458864.3467886","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 12

Abstract

Omnidirectional lighting provides the foundation for achieving spatially-variant photorealistic 3D rendering, a desirable property for mobile augmented reality applications. However, in practice, estimating omnidirectional lighting can be challenging due to limitations such as partial panoramas of the rendering positions and the inherent dynamics of environment lighting and mobile users. A new opportunity has arisen recently with advances in mobile 3D vision, including built-in high-accuracy depth sensors and deep learning-powered algorithms, which provide the means to better sense and understand the physical surroundings. Centering on the key idea of 3D vision, in this work we design an edge-assisted framework called Xihe that provides mobile AR applications with the ability to obtain accurate omnidirectional lighting estimation in real time. Specifically, we develop a novel sampling technique that efficiently compresses the raw point cloud input generated at the mobile device. This technique is derived from our empirical analysis of a recent 3D indoor dataset and plays a key role in the design of our 3D vision-based lighting estimation pipeline. To achieve the real-time goal, we develop a tailored GPU pipeline for on-device point cloud processing and use an encoding technique that reduces the number of bytes transmitted over the network. Finally, we present an adaptive triggering strategy that allows Xihe to skip unnecessary lighting estimations, and a practical way to provide temporally coherent rendering integration with the mobile AR ecosystem. We evaluate both the lighting estimation accuracy and latency of Xihe using a reference mobile application developed with Xihe's APIs. Our results show that Xihe takes as little as 20.67 ms per lighting estimation and achieves 9.4% better estimation accuracy than a state-of-the-art neural network.
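
The abstract describes two client-side mechanisms: compressing the raw point cloud before it is sent to the edge, and an adaptive trigger that skips lighting estimations when the scene has not changed enough to warrant one. The sketch below is a minimal illustration of where such steps could sit in a client pipeline; it is not Xihe's implementation. The voxel-grid downsampling, the color-mean change metric, the function names, and the thresholds are all assumptions made for illustration.

```python
from typing import Optional

import numpy as np


def downsample_point_cloud(points: np.ndarray, voxel_size: float = 0.05) -> np.ndarray:
    """Compress an (N, 6) array of XYZRGB points by keeping one point per voxel.

    Voxel-grid downsampling stands in for Xihe's sampling technique, whose
    exact design is not given in the abstract.
    """
    voxel_ids = np.floor(points[:, :3] / voxel_size).astype(np.int64)
    # Keep the first point that falls into each occupied voxel.
    _, keep_idx = np.unique(voxel_ids, axis=0, return_index=True)
    return points[np.sort(keep_idx)]


def should_trigger_estimation(current: np.ndarray,
                              previous: Optional[np.ndarray],
                              threshold: float = 0.1) -> bool:
    """Decide whether to request a new lighting estimation from the edge.

    The change metric (difference of per-channel color means between the
    current compressed point cloud and the one last sent to the edge) is a
    placeholder for whatever criterion the paper actually uses.
    """
    if previous is None:
        return True
    change = np.abs(current[:, 3:].mean(axis=0) - previous[:, 3:].mean(axis=0))
    return float(change.max()) > threshold
```

In a client loop, each new depth-camera frame would be downsampled and the edge server contacted only when should_trigger_estimation returns True, which matches the abstract's stated goal of skipping unnecessary estimations and reducing network traffic.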