Robust template matching using scale-adaptive deep convolutional features

Jonghee Kim, Jinsu Kim, Seokeon Choi, Muhammad Abul Hasan, Changick Kim

2017 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), December 14, 2017. DOI: 10.1109/APSIPA.2017.8282124
In this paper, we propose a robust and efficient template matching method based on deep convolutional features. The originality of the proposed method is its scale-adaptive feature extraction approach, motivated by the observation that each layer in a CNN represents a different level of deep features of the actual image contents. To keep the features scalable, we adaptively select the CNN layer from which the deep feature vectors of the template and the input image are extracted. Using this scalable, deep representation of the image contents, we solve the template matching problem by measuring the similarity between the features of the template and the input image with an efficient similarity measure, normalized cross-correlation (NCC). Using NCC avoids the redundant computations over adjacent patches that a sliding-window approach incurs. As a result, the proposed method achieves state-of-the-art template matching performance at a computational cost significantly lower than that of the state-of-the-art methods in the literature.
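The paper applies NCC to CNN feature maps; as a minimal, self-contained illustration of the NCC similarity step only (not the authors' implementation, and using a plain 2-D array in place of a deep feature map), the sketch below slides a template over an image and scores each offset with zero-mean, unit-norm correlation, so scores lie in [-1, 1] and the best match peaks near 1.

```python
import numpy as np

def ncc_match(image, template):
    """Return the NCC score map of `template` slid over `image`.

    Both inputs are 2-D arrays (a single channel; in the paper's setting
    this would be one channel of a CNN feature map). At each offset the
    score is the dot product of the mean-subtracted, L2-normalized patch
    and template, i.e. normalized cross-correlation.
    """
    H, W = image.shape
    h, w = template.shape
    t = template - template.mean()
    t_norm = np.linalg.norm(t)
    scores = np.full((H - h + 1, W - w + 1), -1.0)
    for i in range(H - h + 1):
        for j in range(W - w + 1):
            patch = image[i:i + h, j:j + w]
            p = patch - patch.mean()
            denom = np.linalg.norm(p) * t_norm
            if denom > 1e-8:  # skip flat patches to avoid division by zero
                scores[i, j] = float(p.ravel() @ t.ravel()) / denom
    return scores

# Cut a template out of a random image and recover its location.
rng = np.random.default_rng(0)
image = rng.standard_normal((32, 32))
tmpl = image[10:18, 5:13].copy()
scores = ncc_match(image, tmpl)
best = np.unravel_index(np.argmax(scores), scores.shape)  # → (10, 5)
```

This brute-force double loop is exactly the redundant sliding-window computation the paper sidesteps: in practice NCC is evaluated via FFT-based correlation (e.g. OpenCV's `matchTemplate` with `TM_CCOEFF_NORMED`), which shares work between adjacent offsets.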