Multimodal Medical Image Fusion based on the VGG19 Model in the NSCT Domain

ChunXiang Liu, Yuwei Wang, Tianqi Cheng, Xinping Guo, Lei Wang

Recent Advances in Computer Science and Communications, published 2023-10-16. DOI: 10.2174/0126662558256721231009045901
Aim: To address the drawbacks of traditional medical image fusion methods, such as poor preservation of detail, loss of edge information, and image distortion, as well as the large amount of training data required by deep learning methods, a new multi-modal medical image fusion method based on the VGG19 model and the non-subsampled contourlet transform (NSCT) is proposed. Its overall objective is to make full use of the advantages of both the NSCT and the VGG19 model.

Methodology: First, the source images are decomposed by the NSCT into high-pass and low-pass subbands. A weighted-average fusion rule is then applied to produce the fused low-pass subband coefficients, while a feature extractor built on the pre-trained VGG19 model is constructed to obtain the fused high-pass subband coefficients.

Results and Discussion: Finally, the fused image is reconstructed by applying the inverse NSCT to the fused coefficients. To demonstrate the method's effectiveness and accuracy, experiments were conducted on three types of medical datasets.

Conclusion: In comparisons with seven well-known fusion methods, both subjective and objective evaluations show that the proposed method effectively avoids the loss of detailed feature information, captures more medical information from the source images, and integrates it into the fused images.
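The decompose-fuse-reconstruct pipeline described in the Methodology section can be prototyped compactly. The Python sketch below illustrates the same structure under several explicit assumptions: since no standard Python NSCT implementation exists, a single-level 2-D stationary wavelet transform (via pywt) stands in for the NSCT; the choice of the relu1_1 layer, the equal low-pass weights, and the L1-norm feature-activity measure are illustrative assumptions, not details taken from the paper.

import numpy as np
import pywt
import torch
from torchvision import models

# Pre-trained VGG19 truncated at relu1_1 (this layer choice is an
# assumption for the sketch; the paper may use deeper or multiple layers).
_vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).features[:2].eval()

def vgg_activity(band: np.ndarray) -> np.ndarray:
    """Per-pixel activity map: L1 norm over VGG19 feature channels."""
    # Replicate the single-channel band to the 3 channels VGG expects.
    x = torch.from_numpy(band).float()[None, None].repeat(1, 3, 1, 1)
    with torch.no_grad():
        feats = _vgg(x)                       # (1, 64, H, W), size-preserving
    return feats.abs().sum(dim=1)[0].numpy()  # (H, W) activity map

def fuse(img_a: np.ndarray, img_b: np.ndarray) -> np.ndarray:
    """Fuse two co-registered grayscale images (H and W must be even)."""
    # 1. Multi-scale decomposition; the SWT stands in for the NSCT here.
    (low_a, highs_a), = pywt.swt2(img_a, "db2", level=1)
    (low_b, highs_b), = pywt.swt2(img_b, "db2", level=1)

    # 2. Low-pass rule: weighted average (equal weights assumed).
    low_f = 0.5 * low_a + 0.5 * low_b

    # 3. High-pass rule: per pixel, keep the coefficient whose VGG19
    #    feature activity is larger.
    highs_f = tuple(
        np.where(vgg_activity(ha) >= vgg_activity(hb), ha, hb)
        for ha, hb in zip(highs_a, highs_b)
    )

    # 4. The inverse transform reconstructs the fused image.
    return pywt.iswt2([(low_f, highs_f)], "db2")

To stay faithful to the paper, the SWT calls would be replaced with a true NSCT decomposition, which is shift-invariant and directional, and the high-pass activity measure adjusted to whichever rule the authors actually report.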