{"title":"Sketch-Based Shape Retrieval via Multi-view Attention and Generalized Similarity","authors":"Yongzhe Xu, Jiangchuan Hu, K. Zeng, Y. Gong","doi":"10.1109/ICDH.2018.00061","DOIUrl":null,"url":null,"abstract":"Sketch-based shape retrieval has received increasing attention in computer vision and computer graphics. It suffers from the challenge gap between 2D sketches and 3D shapes. In this paper, we propose a generalized similarity matching framework based on a multi-view attention network (MVAN), which can retrieve 3D shape that is most similar to the query sketch. In proposed approach, firstly we compute 2D projections of 3D shapes from multiple viewpoints and utilize a convolutional neural network to extract low level feature maps of these 2D projections. Secondly a multi-view attention network is designed to fuse the feature maps and forms a more accurate 3D shape representation. Meanwhile we use a CNN to extract the feature of sketches. Thirdly the similarity between sketches and 3D shapes is estimated via a generalized similarity model, which fuses some traditional similarity model into a generalized form and optimizes its parameters using a data-driven method. Finally we combine the MVAN and generalized similarity model into a unified network and train the model in an end-to-end manner. 
The experimental results on SHREC'13 and SHREC'14 sketch track benchmark datasets demonstrate that the proposed method can outperform state-of-the-art methods.","PeriodicalId":117854,"journal":{"name":"2018 7th International Conference on Digital Home (ICDH)","volume":"91 6","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"7","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 7th International Conference on Digital Home (ICDH)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICDH.2018.00061","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 7
Abstract
Sketch-based shape retrieval has received increasing attention in computer vision and computer graphics. It suffers from the challenging gap between 2D sketches and 3D shapes. In this paper, we propose a generalized similarity matching framework based on a multi-view attention network (MVAN), which retrieves the 3D shapes most similar to a query sketch. In the proposed approach, we first compute 2D projections of 3D shapes from multiple viewpoints and use a convolutional neural network to extract low-level feature maps from these projections. Second, a multi-view attention network is designed to fuse the feature maps and form a more accurate 3D shape representation; meanwhile, a CNN is used to extract sketch features. Third, the similarity between sketches and 3D shapes is estimated via a generalized similarity model, which fuses several traditional similarity models into a generalized form and optimizes its parameters in a data-driven manner. Finally, we combine the MVAN and the generalized similarity model into a unified network and train it in an end-to-end manner. Experimental results on the SHREC'13 and SHREC'14 sketch track benchmark datasets demonstrate that the proposed method outperforms state-of-the-art methods.
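The two core operations described above can be sketched in miniature. The snippet below is a simplified, hypothetical illustration, not the paper's implementation: attention-based fusion is reduced to a softmax-weighted average of per-view feature vectors with an assumed scoring vector `w`, and the generalized similarity is shown as a learnable quadratic-plus-bilinear form with assumed parameter matrices `A`, `B`, `C` (in the paper, all of these are learned end-to-end from CNN features).

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

def fuse_views(view_feats, w):
    """Attention-style fusion of per-view features into one shape descriptor.

    view_feats: (V, D) array, one D-dim CNN feature per 2D projection.
    w:          (D,) hypothetical scoring vector (learned in practice).
    """
    scores = view_feats @ w          # (V,) relevance score per view
    alpha = softmax(scores)          # attention weights, sum to 1
    return alpha @ view_feats        # (D,) convex combination of views

def generalized_similarity(x, y, A, B, C):
    """Generalized similarity: quadratic terms plus a bilinear cross term.

    With A = B = -I and C = I this reduces to the negative squared
    Euclidean distance, showing how classical similarity models arise
    as special cases; in general A, B, C are learned from data.
    """
    return x @ A @ x + y @ B @ y + 2 * x @ C @ y

# Toy usage: fuse three 2-D "view features", then score against a "sketch".
views = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
w = np.array([1.0, 0.0])
shape_desc = fuse_views(views, w)
A = B = -np.eye(2)
C = np.eye(2)
sketch_desc = np.array([0.5, 0.5])
score = generalized_similarity(sketch_desc, shape_desc, A, B, C)
```

With the special-case parameters above, the score equals the negative squared distance between the sketch and fused shape descriptors, so identical descriptors achieve the maximum similarity of zero.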