{"title":"具有模态代码意识的医学多模态图像转换","authors":"Zhihua Li;Yuxi Jin;Qingneng Li;Zhenxing Huang;Zixiang Chen;Chao Zhou;Na Zhang;Xu Zhang;Wei Fan;Jianmin Yuan;Qiang He;Weiguang Zhang;Dong Liang;Zhanli Hu","doi":"10.1109/TRPMS.2024.3379580","DOIUrl":null,"url":null,"abstract":"In the planning phase of radiation therapy, positron emission tomography (PET) images are frequently integrated with computed tomography (CT) and MRI to accurately delineate the target region for treatment. However, obtaining additional CT or magnetic resonance (MR) images solely for localization purposes proves to be financially burdensome, time-intensive, and may increase patient radiation exposure. To alleviate these issues, a deep learning model with dynamic modality translation capabilities is introduced. This approach is achieved through the incorporation of adaptive modality translation layers within the decoder module. The adaptive modality translation layer effectively governs modality transformation by reshaping the data distribution of features extracted by the encoder using switch codes. The model’s performance is assessed on images with reference images using evaluation metrics, such as peak signal-to-noise ratio, structural similarity index measure, and normalized mean square error. For results without reference images, subjective assessments are provided by six nuclear medicine physicians based on clinical interpretations. The proposed model demonstrates impressive performance in transforming nonattenuation corrected PET images into user-specified modalities (attenuation corrected PET, MR, or CT), effectively streamlining the acquisition of supplemental modality images in radiation therapy scenarios.","PeriodicalId":46807,"journal":{"name":"IEEE Transactions on Radiation and Plasma Medical Sciences","volume":"8 5","pages":"511-520"},"PeriodicalIF":4.6000,"publicationDate":"2024-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Medical Multimodal Image Transformation With Modality Code Awareness\",\"authors\":\"Zhihua Li;Yuxi Jin;Qingneng Li;Zhenxing Huang;Zixiang Chen;Chao Zhou;Na Zhang;Xu Zhang;Wei Fan;Jianmin Yuan;Qiang He;Weiguang Zhang;Dong Liang;Zhanli Hu\",\"doi\":\"10.1109/TRPMS.2024.3379580\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In the planning phase of radiation therapy, positron emission tomography (PET) images are frequently integrated with computed tomography (CT) and MRI to accurately delineate the target region for treatment. However, obtaining additional CT or magnetic resonance (MR) images solely for localization purposes proves to be financially burdensome, time-intensive, and may increase patient radiation exposure. To alleviate these issues, a deep learning model with dynamic modality translation capabilities is introduced. This approach is achieved through the incorporation of adaptive modality translation layers within the decoder module. The adaptive modality translation layer effectively governs modality transformation by reshaping the data distribution of features extracted by the encoder using switch codes. The model’s performance is assessed on images with reference images using evaluation metrics, such as peak signal-to-noise ratio, structural similarity index measure, and normalized mean square error. For results without reference images, subjective assessments are provided by six nuclear medicine physicians based on clinical interpretations. 
The proposed model demonstrates impressive performance in transforming nonattenuation corrected PET images into user-specified modalities (attenuation corrected PET, MR, or CT), effectively streamlining the acquisition of supplemental modality images in radiation therapy scenarios.\",\"PeriodicalId\":46807,\"journal\":{\"name\":\"IEEE Transactions on Radiation and Plasma Medical Sciences\",\"volume\":\"8 5\",\"pages\":\"511-520\"},\"PeriodicalIF\":4.6000,\"publicationDate\":\"2024-03-20\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Radiation and Plasma Medical Sciences\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10477255/\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Radiation and Plasma Medical Sciences","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10477255/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING","Score":null,"Total":0}
Abstract
In the planning phase of radiation therapy, positron emission tomography (PET) images are frequently integrated with computed tomography (CT) and magnetic resonance imaging (MRI) to accurately delineate the treatment target region. However, acquiring additional CT or MR images solely for localization is financially burdensome and time-intensive, and may increase the patient's radiation exposure. To alleviate these issues, a deep learning model with dynamic modality translation capabilities is introduced. This capability is achieved by incorporating adaptive modality translation layers within the decoder module. The adaptive modality translation layer governs the modality transformation by reshaping the distribution of the features extracted by the encoder according to a switch code. Where reference images are available, the model's performance is assessed with evaluation metrics such as peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), and normalized mean square error (NMSE). For results without reference images, six nuclear medicine physicians provide subjective assessments based on clinical interpretation. The proposed model performs well in transforming non-attenuation-corrected (NAC) PET images into a user-specified modality (attenuation-corrected PET, MR, or CT), effectively streamlining the acquisition of supplemental modality images in radiation therapy scenarios.
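The abstract does not spell out the internals of the adaptive modality translation layer. One plausible reading is conditional instance normalization: the switch code selects per-modality scale and shift parameters that reshape the statistics of the encoder features before decoding. The PyTorch sketch below illustrates that reading only; the class name, the `switch_code` argument, and the modality index convention are all hypothetical, not the authors' implementation.

```python
# Minimal sketch of an adaptive modality translation layer, assuming it
# behaves like conditional instance normalization: a modality "switch code"
# selects scale/shift parameters that reshape the encoder feature
# distribution. All names here are illustrative.
import torch
import torch.nn as nn

class AdaptiveModalityTranslation(nn.Module):
    def __init__(self, channels: int, num_modalities: int = 3):
        super().__init__()
        # Per-sample, per-channel normalization; the modality-specific
        # affine transform is applied explicitly below.
        self.norm = nn.InstanceNorm2d(channels, affine=False)
        # One (gamma, beta) pair per target modality (e.g., AC PET, MR, CT).
        self.gamma = nn.Embedding(num_modalities, channels)
        self.beta = nn.Embedding(num_modalities, channels)
        nn.init.ones_(self.gamma.weight)
        nn.init.zeros_(self.beta.weight)

    def forward(self, feats: torch.Tensor, switch_code: torch.Tensor) -> torch.Tensor:
        # feats: (N, C, H, W) encoder features; switch_code: (N,) modality indices.
        g = self.gamma(switch_code)[:, :, None, None]  # (N, C, 1, 1)
        b = self.beta(switch_code)[:, :, None, None]
        return g * self.norm(feats) + b

# Usage: route the same NAC-PET features toward a CT-like output.
layer = AdaptiveModalityTranslation(channels=64, num_modalities=3)
feats = torch.randn(2, 64, 128, 128)
ct_code = torch.tensor([2, 2])  # hypothetical convention: 0=AC PET, 1=MR, 2=CT
out = layer(feats, ct_code)
```

Under this reading, switching the output modality at inference time costs nothing beyond changing the code index, which matches the abstract's claim of dynamic, user-specified translation.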
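The three reference-based metrics named above are standard and can be sketched directly, as below in NumPy. The single-window SSIM here is a simplification for brevity; published evaluations typically average SSIM over local windows (e.g., skimage.metrics.structural_similarity).

```python
# Reference-based metrics from the abstract: PSNR, NMSE, and a global
# (single-window) SSIM using the standard Wang et al. constants.
import numpy as np

def psnr(ref: np.ndarray, pred: np.ndarray, data_range: float) -> float:
    # Peak signal-to-noise ratio in dB relative to the dynamic range.
    mse = np.mean((ref - pred) ** 2)
    return float(10.0 * np.log10(data_range ** 2 / mse))

def nmse(ref: np.ndarray, pred: np.ndarray) -> float:
    # Squared error normalized by the energy of the reference image.
    return float(np.sum((ref - pred) ** 2) / np.sum(ref ** 2))

def ssim_global(ref: np.ndarray, pred: np.ndarray, data_range: float) -> float:
    # Global SSIM over the whole image; local-window averaging omitted.
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mu_x, mu_y = ref.mean(), pred.mean()
    var_x, var_y = ref.var(), pred.var()
    cov = ((ref - mu_x) * (pred - mu_y)).mean()
    return float(((2 * mu_x * mu_y + c1) * (2 * cov + c2)) /
                 ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)))
```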