Improved Modeling of 3D Shapes with Multi-view Depth Maps

Kamal Gupta, S. Jabbireddy, Ketul Shah, Abhinav Shrivastava, Matthias Zwicker
DOI: 10.1109/3DV50981.2020.00017
Venue: 2020 International Conference on 3D Vision (3DV)
Published: 2020-09-07
Citations: 5

Abstract

We present a simple yet effective general-purpose framework for modeling 3D shapes by leveraging recent advances in 2D image generation using CNNs. Using just a single depth image of the object, we can output a dense multi-view depth map representation of 3D objects. Our simple encoder-decoder framework, comprised of a novel identity encoder and class-conditional viewpoint generator, generates 3D consistent depth maps. Our experimental results demonstrate the two-fold advantage of our approach. First, we can directly borrow architectures that work well in the 2D image domain to 3D. Second, we can effectively generate high-resolution 3D shapes with low computational memory. Our quantitative evaluations show that our method is superior to existing depth map methods for reconstructing and synthesizing 3D objects and is competitive with other representations, such as point clouds, voxel grids, and implicit functions. Code and other material will be made available at http://multiview-shapes.umiacs.io.
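The pipeline the abstract describes — an identity encoder that maps a single depth image to a viewpoint-independent code, followed by a class-conditional viewpoint generator that decodes that code plus a discrete viewpoint label into a depth map per view — can be sketched as below. This is a minimal illustrative sketch with random linear layers and toy sizes; the layer shapes, dimensions, and function names are assumptions for exposition, not the published architecture.

```python
# Toy sketch of the encoder-decoder idea: single depth image in,
# one depth map per viewpoint class out. All sizes are illustrative.
import numpy as np

rng = np.random.default_rng(0)

H = W = 32       # depth-map resolution (toy size)
LATENT = 64      # identity-code dimension (assumed)
N_VIEWS = 8      # number of discrete viewpoint classes (assumed)

# Identity encoder: flattens one input depth map into a latent identity code.
W_enc = rng.standard_normal((H * W, LATENT)) * 0.01

def encode_identity(depth_map):
    """Map a single HxW depth image to a viewpoint-independent code."""
    return depth_map.reshape(-1) @ W_enc

# Class-conditional viewpoint generator: decodes the identity code
# concatenated with a one-hot viewpoint label into that view's depth map.
W_dec = rng.standard_normal((LATENT + N_VIEWS, H * W)) * 0.01

def generate_view(identity_code, view_idx):
    one_hot = np.zeros(N_VIEWS)
    one_hot[view_idx] = 1.0
    z = np.concatenate([identity_code, one_hot])
    return (z @ W_dec).reshape(H, W)

# Single input depth image -> dense multi-view depth representation.
input_depth = rng.standard_normal((H, W))
code = encode_identity(input_depth)
multi_view = np.stack([generate_view(code, v) for v in range(N_VIEWS)])
print(multi_view.shape)  # (8, 32, 32): one depth map per viewpoint class
```

Conditioning a single shared decoder on a viewpoint class label, rather than training one decoder per view, is what lets 2D image-generation architectures be reused directly while keeping memory low.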