{"title":"3D-VRVT: 3D Voxel Reconstruction from A Single Image with Vision Transformer","authors":"Xi Li, Ping Kuang","doi":"10.1109/ICCST53801.2021.00078","DOIUrl":null,"url":null,"abstract":"Deep CNN methods have shown very competitive performance in 3D voxel reconstruction from single-view synthetic clean-background images. However, how to generate the target object from a real-world image with clutter background is rarely studied. In this paper, we present a novel network named 3D-VRVT for 3D voxel reconstruction from a single image. Unlike pure CNN-based methods in the past, our 3D-VRVT extracts region features with Vision Transformer (ViT) encoder based on self-attention mechanism, and then a well-designed voxel decoder is used to generate three-dimensional voxel from the encoded image features. The experimental results show that our 3D-VRVT can reconstruct 3D voxel from both synthetic clean-background and real-world images effectively.","PeriodicalId":222463,"journal":{"name":"2021 International Conference on Culture-oriented Science & Technology (ICCST)","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2021-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 International Conference on Culture-oriented Science & Technology (ICCST)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICCST53801.2021.00078","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2
Abstract
Deep CNN methods have shown very competitive performance in 3D voxel reconstruction from single-view synthetic clean-background images. However, how to reconstruct the target object from a real-world image with a cluttered background is rarely studied. In this paper, we present a novel network named 3D-VRVT for 3D voxel reconstruction from a single image. Unlike previous purely CNN-based methods, 3D-VRVT extracts region features with a Vision Transformer (ViT) encoder based on the self-attention mechanism, and a well-designed voxel decoder then generates a three-dimensional voxel grid from the encoded image features. Experimental results show that 3D-VRVT can effectively reconstruct 3D voxels from both synthetic clean-background and real-world images.
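To make the encoder-decoder pipeline described above concrete, the following is a minimal sketch of a ViT-encoder plus voxel-decoder model in PyTorch. It is not the authors' published architecture: the use of the timm library, the vit_base_patch16_224 backbone, the transposed-3D-convolution decoder, and the 32^3 output resolution are all illustrative assumptions chosen only to show how a global image embedding from a self-attention encoder can be mapped to a voxel occupancy grid.

```python
# Illustrative sketch of a ViT encoder + voxel decoder for single-image
# 3D voxel reconstruction. Layer choices, dimensions, and the 32^3 output
# resolution are assumptions, not the 3D-VRVT implementation.
import torch
import torch.nn as nn
import timm  # assumed dependency providing pretrained ViT backbones


class VoxelDecoder(nn.Module):
    """Maps a global image embedding to a 32x32x32 occupancy grid via a
    linear projection followed by 3D transposed convolutions."""

    def __init__(self, embed_dim: int = 768):
        super().__init__()
        self.fc = nn.Linear(embed_dim, 256 * 4 * 4 * 4)
        self.deconv = nn.Sequential(
            nn.ConvTranspose3d(256, 128, kernel_size=4, stride=2, padding=1),  # 4 -> 8
            nn.BatchNorm3d(128), nn.ReLU(inplace=True),
            nn.ConvTranspose3d(128, 64, kernel_size=4, stride=2, padding=1),   # 8 -> 16
            nn.BatchNorm3d(64), nn.ReLU(inplace=True),
            nn.ConvTranspose3d(64, 1, kernel_size=4, stride=2, padding=1),     # 16 -> 32
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        x = self.fc(z).view(-1, 256, 4, 4, 4)
        return torch.sigmoid(self.deconv(x))  # per-voxel occupancy probabilities


class SingleImageVoxelNet(nn.Module):
    """Self-attention ViT encoder over image patches + voxel decoder."""

    def __init__(self):
        super().__init__()
        # num_classes=0 makes timm return the pooled token embedding
        # instead of classification logits.
        self.encoder = timm.create_model(
            "vit_base_patch16_224", pretrained=True, num_classes=0
        )
        self.decoder = VoxelDecoder(embed_dim=self.encoder.num_features)

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(image))


if __name__ == "__main__":
    model = SingleImageVoxelNet()
    voxels = model(torch.randn(1, 3, 224, 224))  # one RGB image
    print(voxels.shape)  # torch.Size([1, 1, 32, 32, 32])
```

In a setup like this, the sigmoid outputs would typically be trained against ground-truth occupancy grids with a binary cross-entropy loss and thresholded (e.g. at 0.5) at inference time to obtain the final voxel reconstruction.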