Deep visible and thermal image fusion for enhancement visibility for surveillance application
V. Voronin, M. Zhdanova, N. Gapon, A. Alepko, A. Zelensky, E. Semenishchev
DOI: 10.1117/12.2641857 · Published: 2022-11-02 · Cited by: 2
Abstract
Additional sources of information (such as depth sensors and thermal sensors) make it possible to extract more informative features and thus increase the reliability and stability of recognition. In this research, we focus on multi-level deep fusion of visible and thermal information. We present an algorithm that combines information from visible cameras and thermal sensors based on deep learning and a parameterized model of logarithmic image processing (PLIP). The proposed neural network is based on the principle of an autoencoder: an encoder extracts image features, and the fused image is obtained by a decoding network. The encoder consists of a convolutional layer and a dense block, which itself consists of convolutional layers. Images are fused in the decoder, with the fusion layer operating on the principle of PLIP, which is close to the perception of the human visual system. The fusion approach is applied to a surveillance application. Experimental results show the effectiveness of the proposed algorithm.
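The abstract outlines the pipeline but gives no implementation details. Below is a minimal, hypothetical PyTorch sketch of the described structure: an encoder consisting of one convolutional layer followed by a dense block, applied to each modality; a fusion step using the standard PLIP addition, g1 ⊕ g2 = g1 + g2 − g1·g2/γ (with γ ≈ 1026 often cited as approximating human visual perception); and a convolutional decoder. All layer sizes, channel counts, and the exact PLIP parameterization are assumptions for illustration, not the authors' published configuration.

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Dense block of convolutional layers: each layer receives the
    concatenation of all preceding feature maps (DenseNet-style)."""
    def __init__(self, in_ch=16, growth=16, num_layers=3):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.Conv2d(in_ch + i * growth, growth, kernel_size=3, padding=1)
            for i in range(num_layers)
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        feats = [x]
        for conv in self.layers:
            feats.append(self.act(conv(torch.cat(feats, dim=1))))
        return torch.cat(feats, dim=1)  # in_ch + num_layers * growth channels

class Encoder(nn.Module):
    """One convolutional layer followed by a dense block, as described
    in the abstract (channel counts are assumed)."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(1, 16, kernel_size=3, padding=1)
        self.dense = DenseBlock(in_ch=16, growth=16, num_layers=3)

    def forward(self, x):
        return self.dense(torch.relu(self.conv(x)))  # 64 channels out

def plip_fusion(f_vis, f_thr, gamma=1026.0):
    """Fuse feature maps with standard PLIP addition:
        g1 (+) g2 = g1 + g2 - g1 * g2 / gamma
    The paper's exact parameterization is not given in the abstract;
    gamma = 1026 is a value commonly used in the PLIP literature."""
    return f_vis + f_thr - f_vis * f_thr / gamma

class Decoder(nn.Module):
    """Stack of convolutions reconstructing the fused image."""
    def __init__(self, in_ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 16, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 1, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

# Toy usage: fuse a random visible/thermal pair (untrained weights).
enc, dec = Encoder(), Decoder(in_ch=64)
vis = torch.rand(1, 1, 128, 128)
thr = torch.rand(1, 1, 128, 128)
fused_features = plip_fusion(enc(vis), enc(thr))
out = dec(fused_features)
print(out.shape)  # torch.Size([1, 1, 128, 128])
```

One design point worth noting: because PLIP addition is pointwise and differentiable, a fusion layer built on it can sit inside the network and train end-to-end with the encoder and decoder, which is presumably what makes it attractive over a fixed post-hoc blending rule.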