Title: Estimating Similarity between Visual and Long Wave Infrared patches using Siamese CNN
Authors: C. S. Jyothi, B. Sandhya
Venue: 2021 International Conference on Emerging Techniques in Computational Intelligence (ICETCI)
Published: 2021-08-25
DOI: 10.1109/ICETCI51973.2021.9574058 (https://doi.org/10.1109/ICETCI51973.2021.9574058)
Abstract
Image matching is the process of identifying correspondences between images of the same scene that differ due to acquisition parameters such as illumination, viewpoint, or noise. Image patch matching involves computing the similarity between patches based on content that is invariant to various photometric or geometric variations. Our objective is to design a convolutional neural network that computes the similarity between visual and infrared image patches of the same scene. Similarity is measured from feature maps extracted from the raw patches. A model is developed that maps each patch to a low-dimensional feature vector, and similarity is calculated by a fully connected layer that outputs the distance between the patches. A threshold is applied to the similarity score, yielding '1' for similar patches and '0' for dissimilar patches. A Siamese CNN architecture based on transfer learning with regression is built, with its convolutional layers trained and tested for patch similarity. The network model is trained on illumination-varying patches from the HPatches dataset and evaluated on a dataset of corresponding visual and long-wave infrared images.
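The pipeline the abstract describes (a shared-weight embedding for each patch, a fully connected head on the feature difference, and a 0/1 threshold on the score) can be sketched as follows. This is a minimal NumPy illustration with untrained random weights and an assumed 64x64 patch size; the paper's actual network uses trained convolutional layers and transfer learning, which are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared (hypothetical) parameters: both branches of a Siamese network use
# the SAME weights, so similar patches land near each other in feature space.
W_embed = rng.standard_normal((64 * 64, 32)) * 0.01  # patch -> 32-d feature
w_head = rng.standard_normal(32) * 0.01              # fully connected head
b_head = 0.0

def embed(patch):
    """Map a 64x64 patch to a low-dimensional feature vector."""
    return np.tanh(patch.reshape(-1) @ W_embed)

def similarity(patch_a, patch_b):
    """Score in (0, 1) from a fully connected layer on the feature difference."""
    diff = np.abs(embed(patch_a) - embed(patch_b))
    return 1.0 / (1.0 + np.exp(-(diff @ w_head + b_head)))  # sigmoid

def match(patch_a, patch_b, threshold=0.5):
    """Threshold the score: 1 for similar patches, 0 for dissimilar ones."""
    return int(similarity(patch_a, patch_b) >= threshold)

# Toy stand-ins for a visual / long-wave infrared patch pair.
visual = rng.standard_normal((64, 64))
infrared = rng.standard_normal((64, 64))
print(match(visual, visual), match(visual, infrared))
```

With identical inputs the feature difference is the zero vector, so the sigmoid score is exactly 0.5 and the threshold fires; in the trained network the head's weights, not a hand-set threshold value, carry the learned notion of cross-spectral similarity.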