A generative adversarial network based on deep supervision for anatomical and functional image fusion

Shiqiang Liu, Weisheng Li, Guofen Wang, Yuping Huang, Yin Zhang, Dan He

Biomedical Signal Processing and Control, Volume 100, Article 107011 (published 2024-10-30). DOI: 10.1016/j.bspc.2024.107011
Abstract: Medical image fusion techniques improve single-image representations by integrating salient information from medical images of different modalities. However, existing fusion methods suffer from limitations such as vanishing gradients, blurred details, and low efficiency. To alleviate these problems, a generative adversarial network based on deep supervision (DSGAN) is proposed. First, a two-branch structure separately extracts salient information, such as texture and metabolic information, from images of different modalities. Self-supervised learning is performed through a new deep supervision module that enhances effective feature extraction. The fused image and the multimodal input images are then fed into the discriminator for evaluation. An adversarial loss based on the Earth Mover's distance ensures that more spatial-frequency, gradient, and contrast information is retained in the fused image, and it makes model training more stable. In addition, DSGAN is an end-to-end model that requires no manually designed fusion rules. Compared with classic fusion methods, the proposed DSGAN retains rich texture details and edge information from the input images, fuses images faster, and achieves superior performance on objective evaluation metrics.
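The two mechanics the abstract names (a two-branch generator, one branch per modality, and a WGAN-style adversarial loss based on the Earth Mover's distance) can be illustrated with a minimal PyTorch sketch. Everything below is an illustrative assumption: the module names, layer sizes, and single critic are placeholders rather than the authors' published DSGAN architecture, and the paper's deep supervision module and content terms (spatial frequency, gradient, contrast) are omitted.

```python
# Minimal sketch, assuming a PyTorch setup; names and layer sizes are
# illustrative placeholders, not the published DSGAN architecture.
import torch
import torch.nn as nn

class TwoBranchGenerator(nn.Module):
    """Toy two-branch fuser: one branch per modality, features concatenated."""
    def __init__(self):
        super().__init__()
        self.branch_a = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())
        self.branch_b = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())
        self.fuse = nn.Conv2d(32, 1, 3, padding=1)

    def forward(self, a, b):
        feats = torch.cat([self.branch_a(a), self.branch_b(b)], dim=1)
        return torch.tanh(self.fuse(feats))

class Critic(nn.Module):
    """Scores images; trained to separate source images from fused outputs."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(16, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.net(x).mean(dim=(1, 2, 3))  # one scalar score per image

def em_adversarial_losses(critic, real, fused):
    # Earth Mover's (Wasserstein) objective: the critic widens the score gap
    # between real source images and fused outputs; the generator narrows it.
    # A full WGAN also needs a Lipschitz constraint (weight clipping or a
    # gradient penalty), omitted here for brevity.
    d_loss = critic(fused.detach()).mean() - critic(real).mean()
    g_loss = -critic(fused).mean()
    return d_loss, g_loss

if __name__ == "__main__":
    gen, critic = TwoBranchGenerator(), Critic()
    mri = torch.randn(2, 1, 64, 64)  # stand-in anatomical modality
    pet = torch.randn(2, 1, 64, 64)  # stand-in functional modality
    fused = gen(mri, pet)
    d_loss, g_loss = em_adversarial_losses(critic, mri, fused)
    print(f"critic loss {d_loss.item():.3f}, generator loss {g_loss.item():.3f}")
```

In an actual training loop the two losses would be minimized in alternation, and the paper's content terms would be added to the generator objective; this sketch only shows how the Earth Mover's distance replaces the usual binary cross-entropy GAN loss.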
Journal overview:
Biomedical Signal Processing and Control aims to provide a cross-disciplinary international forum for the interchange of information on research into the measurement and analysis of signals and images in clinical medicine and the biological sciences. Emphasis is placed on contributions dealing with practical, applications-led research on the use of methods and devices in clinical diagnosis, patient monitoring and management.
Biomedical Signal Processing and Control reflects the main areas in which these methods are being used and developed at the interface of both engineering and clinical science. The scope of the journal is defined to include relevant review papers, technical notes, short communications and letters. Tutorial papers and special issues will also be published.