Sajid Ullah Khan, Meshal Alharbi, Sajid Shah, Mohammed ELAffendi
{"title":"医学图像融合增强多种疾病特征","authors":"Sajid Ullah Khan, Meshal Alharbi, Sajid Shah, Mohammed ELAffendi","doi":"10.1002/ima.23197","DOIUrl":null,"url":null,"abstract":"<div>\n \n <p>Throughout the past 20 years, medical imaging has found extensive application in clinical diagnosis. Doctors may find it difficult to diagnose diseases using only one imaging modality. The main objective of multimodal medical image fusion (MMIF) is to improve both the accuracy and quality of clinical assessments by extracting structural and spectral information from source images. This study proposes a novel MMIF method to assist doctors and postoperations such as image segmentation, classification, and further surgical procedures. Initially, the intensity-hue-saturation (IHS) model is utilized to decompose the positron emission tomography (PET)/single photon emission computed tomography (SPECT) image, followed by a hue-angle mapping method to discriminate high- and low-activity regions in the PET images. Then, a proposed structure feature adjustment (SFA) mechanism is used as a fusion strategy for high- and low-activity regions to obtain structural and anatomical details with minimum color distortion. In the second step, a new multi-discriminator generative adversarial network (MDcGAN) approach is proposed for obtaining the final fused image. The qualitative and quantitative results demonstrate that the proposed method is superior to existing MMIF methods in preserving the structural, anatomical, and functional details of the PET/SPECT images. Through our assessment, involving visual analysis and subsequent verification using statistical metrics, it becomes evident that color changes contribute substantial visual information to the fusion of PET and MR images. The quantitative outcomes demonstrate that, in the majority of cases, the proposed algorithm consistently outperformed other methods. Yet, in a few instances, it achieved the second-highest results. The validity of the proposed method was confirmed using diverse modalities, encompassing a total of 1012 image pairs.</p>\n </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"34 6","pages":""},"PeriodicalIF":3.0000,"publicationDate":"2024-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Medical Image Fusion for Multiple Diseases Features Enhancement\",\"authors\":\"Sajid Ullah Khan, Meshal Alharbi, Sajid Shah, Mohammed ELAffendi\",\"doi\":\"10.1002/ima.23197\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div>\\n \\n <p>Throughout the past 20 years, medical imaging has found extensive application in clinical diagnosis. Doctors may find it difficult to diagnose diseases using only one imaging modality. The main objective of multimodal medical image fusion (MMIF) is to improve both the accuracy and quality of clinical assessments by extracting structural and spectral information from source images. This study proposes a novel MMIF method to assist doctors and postoperations such as image segmentation, classification, and further surgical procedures. Initially, the intensity-hue-saturation (IHS) model is utilized to decompose the positron emission tomography (PET)/single photon emission computed tomography (SPECT) image, followed by a hue-angle mapping method to discriminate high- and low-activity regions in the PET images. 
Then, a proposed structure feature adjustment (SFA) mechanism is used as a fusion strategy for high- and low-activity regions to obtain structural and anatomical details with minimum color distortion. In the second step, a new multi-discriminator generative adversarial network (MDcGAN) approach is proposed for obtaining the final fused image. The qualitative and quantitative results demonstrate that the proposed method is superior to existing MMIF methods in preserving the structural, anatomical, and functional details of the PET/SPECT images. Through our assessment, involving visual analysis and subsequent verification using statistical metrics, it becomes evident that color changes contribute substantial visual information to the fusion of PET and MR images. The quantitative outcomes demonstrate that, in the majority of cases, the proposed algorithm consistently outperformed other methods. Yet, in a few instances, it achieved the second-highest results. The validity of the proposed method was confirmed using diverse modalities, encompassing a total of 1012 image pairs.</p>\\n </div>\",\"PeriodicalId\":14027,\"journal\":{\"name\":\"International Journal of Imaging Systems and Technology\",\"volume\":\"34 6\",\"pages\":\"\"},\"PeriodicalIF\":3.0000,\"publicationDate\":\"2024-10-17\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"International Journal of Imaging Systems and Technology\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1002/ima.23197\",\"RegionNum\":4,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"ENGINEERING, ELECTRICAL & ELECTRONIC\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Imaging Systems and Technology","FirstCategoryId":"94","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/ima.23197","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Medical Image Fusion for Multiple Diseases Features Enhancement
Over the past 20 years, medical imaging has found extensive application in clinical diagnosis, yet doctors may find it difficult to diagnose diseases using only one imaging modality. The main objective of multimodal medical image fusion (MMIF) is to improve both the accuracy and quality of clinical assessments by extracting structural and spectral information from source images. This study proposes a novel MMIF method to assist doctors and to support post-operative tasks such as image segmentation, classification, and further surgical procedures. First, the intensity-hue-saturation (IHS) model is used to decompose the positron emission tomography (PET)/single photon emission computed tomography (SPECT) image, followed by a hue-angle mapping method that discriminates high- and low-activity regions in the PET images. A proposed structure feature adjustment (SFA) mechanism then serves as the fusion strategy for the high- and low-activity regions, capturing structural and anatomical details with minimal color distortion. In the second step, a new multi-discriminator generative adversarial network (MDcGAN) approach is proposed to obtain the final fused image. Qualitative and quantitative results demonstrate that the proposed method is superior to existing MMIF methods in preserving the structural, anatomical, and functional details of the PET/SPECT images. Our assessment, combining visual analysis with verification against statistical metrics, makes it evident that color changes contribute substantial visual information to the fusion of PET and MR images. The quantitative outcomes show that the proposed algorithm outperformed the other methods in the majority of cases; in a few instances, it achieved the second-best results. The validity of the proposed method was confirmed using diverse modalities, encompassing a total of 1012 image pairs.
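The abstract does not give implementation details for the IHS decomposition or the hue-angle mapping, so the following Python sketch is only a minimal illustration of the general idea: a standard linear IHS transform of a pseudo-color PET/SPECT slice, followed by a hue-angle threshold that separates putative high- and low-activity regions. The transform matrix, the hue_threshold value, and the background intensity cutoff are placeholder assumptions, not the authors' parameters.

```python
# Illustrative sketch only: linear IHS decomposition and a simple hue-angle
# split of a pseudo-color PET/SPECT image. Thresholds are assumptions.
import numpy as np

def ihs_decompose(rgb):
    """Decompose an RGB image (H, W, 3) with values in [0, 1] into I, v1, v2."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    i = (r + g + b) / 3.0                   # intensity component
    v1 = (2.0 * b - r - g) / np.sqrt(6.0)   # first chromatic axis
    v2 = (r - g) / np.sqrt(2.0)             # second chromatic axis
    return i, v1, v2

def hue_angle(v1, v2):
    """Per-pixel hue angle in radians, derived from the chromatic components."""
    return np.arctan2(v2, v1)

def split_activity_regions(rgb, hue_threshold=0.0, bg_cutoff=0.1):
    """Return boolean masks for 'high-activity' and 'low-activity' pixels.

    hue_threshold and bg_cutoff are illustrative: in a hot-metal PET colormap,
    warm hues (reds/yellows) typically indicate higher tracer uptake, and very
    dark pixels are treated as background.
    """
    i, v1, v2 = ihs_decompose(rgb)
    h = hue_angle(v1, v2)
    foreground = i > bg_cutoff
    high = (h > hue_threshold) & foreground   # warm, non-background pixels
    low = ~high & foreground                  # remaining non-background pixels
    return high, low

if __name__ == "__main__":
    pet = np.random.rand(128, 128, 3)         # stand-in for a pseudo-color PET slice
    high_mask, low_mask = split_activity_regions(pet)
    print(high_mask.sum(), low_mask.sum())
```

In a full pipeline along the lines described above, such masks would feed the SFA fusion step for the high- and low-activity regions, with the MDcGAN stage producing the final fused image.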
Journal Introduction:
The International Journal of Imaging Systems and Technology (IMA) is a forum for the exchange of ideas and results relevant to imaging systems, including imaging physics and informatics. The journal covers all imaging modalities in humans and animals.
IMA accepts technically sound and scientifically rigorous research in the interdisciplinary field of imaging, including relevant algorithmic research and hardware and software development, and their applications relevant to medical research. The journal provides a platform to publish original research in structural and functional imaging.
The journal is also open to imaging studies of the human body and of animals that describe novel diagnostic imaging and analysis methods. Technical, theoretical, and clinical research in both normal and clinical populations is encouraged. Submissions describing methods, software, databases, and replication studies, as well as negative results, are also considered.
The scope of the journal includes, but is not limited to, the following in the context of biomedical research:
Imaging and neuro-imaging modalities: structural MRI, functional MRI, PET, SPECT, CT, ultrasound, EEG, MEG, NIRS etc.;
Neuromodulation and brain stimulation techniques such as TMS and tDCS;
Software and hardware for imaging, especially related to human and animal health;
Image segmentation in normal and clinical populations;
Pattern analysis and classification using machine learning techniques;
Computational modeling and analysis;
Brain connectivity and connectomics;
Systems-level characterization of brain function;
Neural networks and neurorobotics;
Computer vision based on human/animal physiology;
Brain-computer interface (BCI) technology;
Big data, databasing and data mining.