{"title":"基于csr的低照度可见光与红外图像融合方法","authors":"N. Ma, Y. Cao, Z. Zhang, Y. Fan, M. Ding","doi":"10.1017/aer.2023.51","DOIUrl":null,"url":null,"abstract":"\n Machine vision has been extensively researched in the field of unmanned aerial vehicles (UAV) recently. However, the ability of Sense and Avoid (SAA) largely limited by environmental visibility, which brings hazards to flight safety in low illumination or nighttime conditions. In order to solve this critical problem, an approach of image enhancement is proposed in this paper to improve image qualities in low illumination conditions. Considering the complementarity of visible and infrared images, a visible and infrared image fusion method based on convolutional sparse representation (CSR) is a promising solution to improve the SAA ability of UAVs. Firstly, the source image is decomposed into a texture layer and structure layer since infrared images are good at characterising structural information, and visible images have richer texture information. Both the structure and the texture layers are transformed into the sparse convolutional domain through the CSR mechanism, and then CSR coefficient mapping are fused via activity level assessment. Finally, the image is synthesised through the reconstruction results of the fusion texture and structure layers. In the experimental simulation section, a series of visible and infrared registered images including aerial targets are adopted to evaluate the proposed algorithm. Experimental results demonstrates that the proposed method increases image qualities in low illumination conditions effectively and can enhance the object details, which has better performance than traditional methods.","PeriodicalId":22567,"journal":{"name":"The Aeronautical Journal (1968)","volume":"100 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2023-07-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A CSR-based visible and infrared image fusion method in low illumination conditions for sense and avoid\",\"authors\":\"N. Ma, Y. Cao, Z. Zhang, Y. Fan, M. Ding\",\"doi\":\"10.1017/aer.2023.51\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"\\n Machine vision has been extensively researched in the field of unmanned aerial vehicles (UAV) recently. However, the ability of Sense and Avoid (SAA) largely limited by environmental visibility, which brings hazards to flight safety in low illumination or nighttime conditions. In order to solve this critical problem, an approach of image enhancement is proposed in this paper to improve image qualities in low illumination conditions. Considering the complementarity of visible and infrared images, a visible and infrared image fusion method based on convolutional sparse representation (CSR) is a promising solution to improve the SAA ability of UAVs. Firstly, the source image is decomposed into a texture layer and structure layer since infrared images are good at characterising structural information, and visible images have richer texture information. Both the structure and the texture layers are transformed into the sparse convolutional domain through the CSR mechanism, and then CSR coefficient mapping are fused via activity level assessment. Finally, the image is synthesised through the reconstruction results of the fusion texture and structure layers. 
In the experimental simulation section, a series of visible and infrared registered images including aerial targets are adopted to evaluate the proposed algorithm. Experimental results demonstrates that the proposed method increases image qualities in low illumination conditions effectively and can enhance the object details, which has better performance than traditional methods.\",\"PeriodicalId\":22567,\"journal\":{\"name\":\"The Aeronautical Journal (1968)\",\"volume\":\"100 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-07-03\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"The Aeronautical Journal (1968)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1017/aer.2023.51\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"The Aeronautical Journal (1968)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1017/aer.2023.51","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
A CSR-based visible and infrared image fusion method in low illumination conditions for sense and avoid
Machine vision has been extensively researched in the field of unmanned aerial vehicles (UAVs) in recent years. However, the Sense and Avoid (SAA) capability is largely limited by environmental visibility, which endangers flight safety in low-illumination or nighttime conditions. To address this problem, an image enhancement approach is proposed in this paper to improve image quality in low-illumination conditions. Considering the complementarity of visible and infrared images, a visible and infrared image fusion method based on convolutional sparse representation (CSR) is a promising solution for improving the SAA ability of UAVs. First, each source image is decomposed into a texture layer and a structure layer, since infrared images are good at characterising structural information while visible images carry richer texture information. Both the structure and texture layers are transformed into the sparse convolutional domain through the CSR mechanism, and the resulting CSR coefficient maps are fused via activity-level assessment. Finally, the fused image is synthesised from the reconstructed texture and structure layers. In the experimental section, a series of registered visible and infrared images containing aerial targets is used to evaluate the proposed algorithm. The experimental results demonstrate that the proposed method effectively improves image quality in low-illumination conditions and enhances object details, outperforming traditional methods.
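As a rough illustration of the pipeline the abstract describes (decompose into structure and texture layers, encode each layer in the sparse convolutional domain, fuse the coefficient maps by activity-level assessment, then reconstruct and recombine), the following Python sketch may help. It is not the authors' implementation: the Gaussian smoothing used for the decomposition, the random filter bank standing in for a learned dictionary, the plain ISTA solver, the windowed L1 choose-max fusion rule, and all function and parameter names (decompose, csc_ista, fuse_layer, fuse_visible_infrared, lam, win) are assumptions made here for illustration only.

```python
"""Illustrative sketch of a CSR-style two-layer visible/infrared fusion pipeline.

NOT the paper's implementation. Assumed simplifications: Gaussian smoothing for
the structure/texture split, a small random filter bank in place of a learned
dictionary, plain ISTA as the convolutional sparse coding solver, and a
windowed L1 choose-max rule for the activity-level assessment.
"""
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter
from scipy.signal import fftconvolve


def decompose(img, sigma=3.0):
    """Split an image into a smooth structure layer and a texture residual."""
    structure = gaussian_filter(img, sigma)
    return structure, img - structure


def csc_ista(layer, filters, lam=0.05, n_iter=60):
    """Convolutional sparse coding of one layer via plain ISTA (a stand-in for a
    dedicated CSR solver). Returns one coefficient map per dictionary filter."""
    step = 1.0 / len(filters)  # safe step size for filters with unit L1 norm
    coeffs = [np.zeros_like(layer) for _ in filters]
    for _ in range(n_iter):
        recon = sum(fftconvolve(c, f, mode="same") for c, f in zip(coeffs, filters))
        residual = recon - layer
        for m, f in enumerate(filters):
            # gradient step: correlation with the filter acts as the adjoint of convolution
            grad = fftconvolve(residual, f[::-1, ::-1], mode="same")
            z = coeffs[m] - step * grad
            # soft threshold enforces sparsity of the coefficient map
            coeffs[m] = np.sign(z) * np.maximum(np.abs(z) - lam * step, 0.0)
    return coeffs


def fuse_layer(layer_a, layer_b, filters, win=7):
    """Encode both source layers, fuse coefficient maps by choose-max on local
    L1 activity, then reconstruct the fused layer from the fused coefficients."""
    ca, cb = csc_ista(layer_a, filters), csc_ista(layer_b, filters)
    act_a = uniform_filter(sum(np.abs(c) for c in ca), size=win)  # activity level of A
    act_b = uniform_filter(sum(np.abs(c) for c in cb), size=win)  # activity level of B
    mask = act_a >= act_b
    fused = [np.where(mask, a, b) for a, b in zip(ca, cb)]
    return sum(fftconvolve(c, f, mode="same") for c, f in zip(fused, filters))


def fuse_visible_infrared(visible, infrared, n_filters=8, filter_size=7, seed=0):
    """Two-layer CSR-style fusion of a registered visible/infrared image pair."""
    rng = np.random.default_rng(seed)
    bank = rng.standard_normal((n_filters, filter_size, filter_size))
    filters = [f / np.abs(f).sum() for f in bank]  # unit L1 norm (see csc_ista)
    vi_s, vi_t = decompose(visible)
    ir_s, ir_t = decompose(infrared)
    fused_structure = fuse_layer(vi_s, ir_s, filters)
    fused_texture = fuse_layer(vi_t, ir_t, filters)
    return np.clip(fused_structure + fused_texture, 0.0, 1.0)


if __name__ == "__main__":
    vis = np.random.rand(128, 128)  # stands in for a registered visible frame in [0, 1]
    ir = np.random.rand(128, 128)   # stands in for the registered infrared frame
    print(fuse_visible_infrared(vis, ir).shape)  # -> (128, 128)
```

In practice the dictionary filters would be learned offline and a dedicated convolutional sparse coding solver would replace the ISTA loop, but the data flow (decompose, encode, fuse coefficient maps, reconstruct, recombine) mirrors the description in the abstract.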