{"title":"基于生物启发的响尾蛇视觉机制的昼夜两级照明环境下的图像融合","authors":"Yong Wang, Hongmin Zou","doi":"10.1007/s42235-024-00496-5","DOIUrl":null,"url":null,"abstract":"<div><p>This study, grounded in Waxman fusion method, introduces an algorithm for the fusion of visible and infrared images, tailored to a two-level lighting environment, inspired by the mathematical model of the visual receptive field of rattlesnakes and the two-mode cells' mechanism. The research presented here is segmented into three components. In the first segment, we design a preprocessing module to judge the ambient light intensity and divide the lighting environment into two levels: day and night. The second segment proposes two distinct network structures designed specifically for these daytime and nighttime images. For the daytime images, where visible light information is predominant, we feed the ON-VIS signal and the IR-enhanced visual signal into the central excitation and surrounding suppression regions of the ON-center receptive field in the B channel, respectively. Conversely, for nighttime images where infrared information takes precedence, the ON-IR signal and the Visual-enhanced IR signal are separately input into the central excitation and surrounding suppression regions of the ON-center receptive field in the B channel. The outcome is a pseudo-color fused image. The third segment employs five different no-reference image quality assessment methods to evaluate the quality of thirteen sets of pseudo-color images produced by fusing infrared and visible information. These images are then compared with those obtained by six other methods cited in the relevant reference. The empirical results indicate that this study's outcomes surpass the comparative results in terms of average gradient and spatial frequency. Only one or two sets of fused images underperformed in terms of standard deviation and entropy when compared to the control results. 
Four sets of fused images did not perform as well as the comparison in the Q<sup>AB/F</sup> index. In conclusion, the fused images generated through the proposed method show superior performance in terms of scene detail, visual perception, and image sharpness when compared with their counterparts from other methods.</p></div>","PeriodicalId":614,"journal":{"name":"Journal of Bionic Engineering","volume":"21 3","pages":"1496 - 1510"},"PeriodicalIF":4.9000,"publicationDate":"2024-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Image Fusion Based on Bioinspired Rattlesnake Visual Mechanism Under Lighting Environments of Day and Night Two Levels\",\"authors\":\"Yong Wang, Hongmin Zou\",\"doi\":\"10.1007/s42235-024-00496-5\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>This study, grounded in Waxman fusion method, introduces an algorithm for the fusion of visible and infrared images, tailored to a two-level lighting environment, inspired by the mathematical model of the visual receptive field of rattlesnakes and the two-mode cells' mechanism. The research presented here is segmented into three components. In the first segment, we design a preprocessing module to judge the ambient light intensity and divide the lighting environment into two levels: day and night. The second segment proposes two distinct network structures designed specifically for these daytime and nighttime images. For the daytime images, where visible light information is predominant, we feed the ON-VIS signal and the IR-enhanced visual signal into the central excitation and surrounding suppression regions of the ON-center receptive field in the B channel, respectively. 
Conversely, for nighttime images where infrared information takes precedence, the ON-IR signal and the Visual-enhanced IR signal are separately input into the central excitation and surrounding suppression regions of the ON-center receptive field in the B channel. The outcome is a pseudo-color fused image. The third segment employs five different no-reference image quality assessment methods to evaluate the quality of thirteen sets of pseudo-color images produced by fusing infrared and visible information. These images are then compared with those obtained by six other methods cited in the relevant reference. The empirical results indicate that this study's outcomes surpass the comparative results in terms of average gradient and spatial frequency. Only one or two sets of fused images underperformed in terms of standard deviation and entropy when compared to the control results. Four sets of fused images did not perform as well as the comparison in the Q<sup>AB/F</sup> index. In conclusion, the fused images generated through the proposed method show superior performance in terms of scene detail, visual perception, and image sharpness when compared with their counterparts from other methods.</p></div>\",\"PeriodicalId\":614,\"journal\":{\"name\":\"Journal of Bionic Engineering\",\"volume\":\"21 3\",\"pages\":\"1496 - 1510\"},\"PeriodicalIF\":4.9000,\"publicationDate\":\"2024-04-02\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Bionic Engineering\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://link.springer.com/article/10.1007/s42235-024-00496-5\",\"RegionNum\":3,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"ENGINEERING, 
MULTIDISCIPLINARY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Bionic Engineering","FirstCategoryId":"94","ListUrlMain":"https://link.springer.com/article/10.1007/s42235-024-00496-5","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, MULTIDISCIPLINARY","Score":null,"Total":0}
Citations: 0
Abstract
This study, grounded in the Waxman fusion method and inspired by the mathematical model of the rattlesnake visual receptive field and the mechanism of its bimodal cells, introduces an algorithm for fusing visible and infrared images under a two-level lighting environment. The research is divided into three parts. In the first, we design a preprocessing module that judges the ambient light intensity and divides the lighting environment into two levels: day and night. The second part proposes two distinct network structures tailored to daytime and nighttime images. For daytime images, where visible-light information is predominant, the ON-VIS signal and the IR-enhanced visible signal are fed into the central excitation and surrounding suppression regions, respectively, of the ON-center receptive field in the B channel. Conversely, for nighttime images, where infrared information takes precedence, the ON-IR signal and the visible-enhanced IR signal are input into the central excitation and surrounding suppression regions of the ON-center receptive field in the B channel. The result is a pseudo-color fused image. The third part employs five no-reference image quality assessment methods to evaluate the quality of thirteen sets of pseudo-color images produced by fusing infrared and visible information, and compares them with images obtained by six other methods cited in the relevant literature. The empirical results indicate that this study's outcomes surpass the comparison results in average gradient and spatial frequency; only one or two sets of fused images underperformed in standard deviation and entropy, and four sets underperformed in the Q^AB/F index. In conclusion, the fused images generated by the proposed method show superior performance in scene detail, visual perception, and image sharpness compared with their counterparts from other methods.
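The two-level preprocessing and the ON-center/OFF-surround wiring described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the luminance threshold, Gaussian scales, and R/G channel assignments are assumptions, and the receptive field is modeled with a generic difference-of-Gaussians.

```python
import numpy as np

def is_daytime(visible, threshold=0.4):
    """Judge ambient light level from mean visible intensity.
    The threshold is a guessed placeholder; the paper's actual
    day/night criterion is not specified here."""
    return float(visible.mean()) >= threshold

def _gaussian_kernel(sigma):
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-(x ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def _blur(img, sigma):
    # Separable Gaussian blur with edge padding; output same size as input.
    k = _gaussian_kernel(sigma)
    r = len(k) // 2
    p = np.pad(img, r, mode="edge")
    tmp = np.apply_along_axis(lambda v: np.convolve(v, k, mode="valid"), 0, p)
    return np.apply_along_axis(lambda v: np.convolve(v, k, mode="valid"), 1, tmp)

def on_center_response(center, surround, sigma_c=1.0, sigma_s=3.0):
    # Difference-of-Gaussians model of an ON-center receptive field:
    # narrow central excitation minus wide surround suppression.
    return np.clip(_blur(center, sigma_c) - _blur(surround, sigma_s), 0.0, 1.0)

def fuse(visible, infrared):
    """Route signals by light level: in daytime the visible signal drives the
    central excitation and infrared the surround suppression; at night the
    roles are swapped. The R/G wiring below is illustrative only."""
    if is_daytime(visible):
        b = on_center_response(visible, infrared)
    else:
        b = on_center_response(infrared, visible)
    return np.dstack([infrared, visible, b])  # pseudo-colour RGB
```

The branch on `is_daytime` mirrors the paper's two network structures: the same receptive-field operator is reused, with only the center/surround inputs exchanged between the day and night cases.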
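Most of the no-reference metrics named in the evaluation (average gradient, spatial frequency, standard deviation, and entropy) have standard definitions and can be computed directly; the sketch below uses those common formulations and is not the paper's exact implementation (Q^AB/F is omitted).

```python
import numpy as np

def average_gradient(img):
    """Mean magnitude of horizontal/vertical intensity differences;
    higher values indicate sharper detail."""
    gx = np.diff(img, axis=1)[:-1, :]   # crop both to a common shape
    gy = np.diff(img, axis=0)[:, :-1]
    return float(np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2)))

def spatial_frequency(img):
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))  # row frequency
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))  # column frequency
    return float(np.sqrt(rf ** 2 + cf ** 2))

def entropy(img, bins=256):
    """Shannon entropy of the grey-level histogram, in bits
    (assumes intensities normalised to [0, 1])."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def standard_deviation(img):
    return float(img.std())
```

A flat image scores zero on all four metrics, which is why the abstract reports average gradient and spatial frequency as measures of detail and sharpness: both grow with local intensity variation.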
About the journal:
The Journal of Bionic Engineering (JBE) is a peer-reviewed journal that publishes original research papers and reviews that apply the knowledge learned from nature and biological systems to solve concrete engineering problems. The topics that JBE covers include but are not limited to:
Mechanisms, kinematics, mechanics and control of animal locomotion; development of mobile robots with walking (running and crawling), swimming or flying abilities inspired by animal locomotion.
Structures, morphologies, composition and physical properties of natural materials and biomaterials; fabrication of new materials mimicking the properties and functions of natural materials and biomaterials.
Biomedical materials, artificial organs and tissue engineering for medical applications; rehabilitation equipment and devices.
Development of bioinspired computation methods and artificial intelligence for engineering applications.