3D model retrieval based on deep Autoencoder neural networks

Zhaowei Liu, Yung-Yao Chen, S. Hidayati, S. Chien, Feng-Chia Chang, K. Hua

2017 International Conference on Signals and Systems (ICSigSys), May 2017
DOI: 10.1109/ICSIGSYS.2017.7967059
Citations: 5
Abstract
The rapid growth of 3D model resources for 3D printing has created an urgent need for 3D model retrieval systems. Thanks to advances in hardware, 3D models can now be easily rendered and viewed on a tablet or handheld mobile device. In this paper, we present a novel 3D model retrieval method that combines view-based features with deep learning. Because 2D images are highly distinguishable, representing a 3D model by multiple 2D views is one of the most common approaches to 3D model retrieval. Pose normalization is typically challenging and time-consuming for view-based retrieval methods; this work instead uses an unsupervised deep learning technique, the autoencoder, to refine compact view-based features. The proposed method is therefore rotation-invariant, requiring only translation and scale normalization of the 3D models in the dataset. For robustness, we use Fourier descriptors and Zernike moments to represent the 2D view features. Experiments on the publicly available Princeton Shape Benchmark show that our method retrieves models more accurately than existing methods.
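The abstract gives no implementation details, but the pipeline it describes (per-view 2D shape descriptors compressed by an unsupervised autoencoder, with retrieval by distance in the learned code space) can be sketched roughly as below. Everything here is an illustrative assumption rather than the paper's method: the descriptor and network sizes, the training setup, and the random array standing in for real view descriptors are all invented, and Zernike moments are omitted from the sketch.

```python
# A minimal sketch, under stated assumptions: magnitude-only Fourier
# descriptors per rendered view, an unsupervised autoencoder that
# compresses them, and retrieval by nearest neighbours in code space.
import numpy as np
import torch
import torch.nn as nn

def fourier_descriptors(contour_xy: np.ndarray, n_coeffs: int = 10) -> np.ndarray:
    """Fourier descriptors of a closed 2D contour given as (N, 2)
    boundary points; translation-, rotation-, and scale-invariant
    by construction."""
    z = contour_xy[:, 0] + 1j * contour_xy[:, 1]      # boundary as complex signal
    mags = np.abs(np.fft.fft(z))
    # Drop the DC term (translation), keep magnitudes only (rotation),
    # and divide by the first harmonic (scale).
    return mags[1:n_coeffs + 1] / (mags[1] + 1e-12)

class ViewAutoencoder(nn.Module):
    """Fully connected autoencoder compressing concatenated per-view
    descriptors into a compact code (sizes are illustrative)."""
    def __init__(self, in_dim: int, code_dim: int = 16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                     nn.Linear(64, code_dim))
        self.decoder = nn.Sequential(nn.Linear(code_dim, 64), nn.ReLU(),
                                     nn.Linear(64, in_dim))

    def forward(self, x):
        return self.decoder(self.encoder(x))

if __name__ == "__main__":
    # Toy contour (an ellipse) standing in for a silhouette boundary
    # extracted from one rendered view.
    theta = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
    contour = np.stack([np.cos(theta), 0.5 * np.sin(theta)], axis=1)
    print("Fourier descriptors:", fourier_descriptors(contour).round(3))

    # Random array standing in for descriptors of many models' views.
    rng = np.random.default_rng(0)
    feats = rng.normal(size=(512, 40)).astype(np.float32)

    model = ViewAutoencoder(in_dim=feats.shape[1])
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    x = torch.from_numpy(feats)
    for _ in range(50):                              # unsupervised reconstruction training
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(x), x)
        loss.backward()
        opt.step()

    with torch.no_grad():
        codes = model.encoder(x)                     # compact retrieval features
    query = codes[0]
    dists = torch.norm(codes - query, dim=1)         # rank database by code distance
    print("Nearest neighbours of model 0:", torch.argsort(dists)[:5].tolist())
```

In this sketch the rotation invariance comes from the magnitude-only descriptors; the paper pairs Fourier descriptors with Zernike moments (for which an off-the-shelf implementation such as mahotas' zernike_moments could stand in) and relies on the autoencoder over multiple views to avoid rotation normalization.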