{"title":"A Novel Single Target Auto-annotation Algorithm for SAR Images Based on Pre-trained CNN Classifier","authors":"Moulay Idriss Bellil, Xiaojian Xu","doi":"10.1145/3318299.3318366","DOIUrl":null,"url":null,"abstract":"Convolutional neural networks (CNNs) are extremely important building blocks for abstract deep learning algorithm constructs regarding visual interpretation especially when it comes to synthetic aperture radar (SAR) images. An ongoing research is being made in order to improve their accuracy forgetting about the undiscovered internals. CNNs are usually being used as black boxes that produce in a non-linear fashion abstract interpretations. In this paper, however, we propose a novel algorithm that shows where CNNs look in an image to provide the answer to the provided classification problem applied to SAR images. We provide also results as bounding boxes using only a pre-trained classification network and some post-processing. The algorithm uses a brute-force approach given a pre-trained neural network, it removes gradually lines of pixels and checks the effect on the resulting scores, and it post-processes the resulting scores to infer the most important region in a given input image. Although other attempts have been made in the literature to provide solutions to the problem, by reversing the convolutional map filters, they are limited in scope and generally fail to deal with a complex network such as the award winning Resnet. Our algorithm, in this category, is of significant usefulness, it bridges the gap between the object classification and object detection problems, opening new perspectives to eliminate the time-consuming task of manual object annotation.","PeriodicalId":164987,"journal":{"name":"International Conference on Machine Learning and Computing","volume":"726 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-02-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Conference on Machine Learning and Computing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3318299.3318366","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Convolutional neural networks (CNNs) are essential building blocks of deep learning architectures for visual interpretation, especially for synthetic aperture radar (SAR) images. Ongoing research focuses on improving their accuracy while largely ignoring their internal workings: CNNs are typically used as black boxes that produce abstract interpretations in a non-linear fashion. In this paper, we propose a novel algorithm that reveals where a CNN looks in an image when answering a classification problem applied to SAR images. We also provide results as bounding boxes using only a pre-trained classification network and some post-processing. The algorithm takes a brute-force approach: given a pre-trained network, it gradually removes lines of pixels, measures the effect on the resulting classification scores, and post-processes those scores to infer the most important region in the input image. Although other attempts in the literature address this problem by inverting the convolutional map filters, they are limited in scope and generally fail on complex networks such as the award-winning ResNet. Our algorithm is of significant usefulness in this category: it bridges the gap between the object classification and object detection problems, opening new perspectives for eliminating the time-consuming task of manual object annotation.
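
To make the line-occlusion idea described above concrete, the following is a minimal Python sketch, not the authors' implementation. It assumes a hypothetical `score_fn` callable that maps a single-channel image array to the target-class score of a pre-trained classifier; it occludes one row or column of pixels at a time, records the drop in score, and thresholds the drops to form a bounding box.

```python
# Illustrative sketch of occlusion-based single-target localization with a
# pre-trained classifier. `score_fn` is a hypothetical stand-in for any
# pre-trained CNN wrapped to return the target-class probability.

import numpy as np

def line_sensitivity(image, score_fn, fill_value=0.0):
    """Score drop caused by occluding each row and each column of `image`."""
    base = score_fn(image)
    h, w = image.shape
    row_drop = np.zeros(h)
    col_drop = np.zeros(w)
    for r in range(h):
        occluded = image.copy()
        occluded[r, :] = fill_value              # remove one horizontal line of pixels
        row_drop[r] = base - score_fn(occluded)  # how much the score suffers
    for c in range(w):
        occluded = image.copy()
        occluded[:, c] = fill_value              # remove one vertical line of pixels
        col_drop[c] = base - score_fn(occluded)
    return row_drop, col_drop

def bounding_box(row_drop, col_drop, rel_threshold=0.5):
    """Post-process the drops: keep lines whose drop exceeds a fraction of the
    maximum drop and return the enclosing box (r0, r1, c0, c1)."""
    rows = np.where(row_drop >= rel_threshold * row_drop.max())[0]
    cols = np.where(col_drop >= rel_threshold * col_drop.max())[0]
    return rows.min(), rows.max(), cols.min(), cols.max()
```

A typical use would compute `row_drop, col_drop = line_sensitivity(img, score_fn)` and then `bounding_box(row_drop, col_drop)`; the relative threshold and the occlusion fill value are free parameters of this sketch, not values taken from the paper.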