Attention-driven visual emphasis for medical volumetric image visualization

Mingjian Li, Younhyun Jung, Shaoli Song, Jinman Kim

The Visual Computer, 2024-08-06. DOI: 10.1007/s00371-024-03596-9
Abstract
Direct volume rendering (DVR) is a widely used technique for three-dimensional visualization of volumetric medical images. A key goal of DVR is to enable users to visually emphasize regions of interest (ROIs) that may be occluded by other structures. Conventional methods for visually emphasizing ROIs require extensive user involvement to adjust the rendering parameters that reduce the occlusion, and these adjustments depend on the user’s viewing direction. Several works automatically preserve the view of the ROIs by eliminating occluding structures of lower importance in a view-dependent manner; however, they require pre-segmentation labels and manual importance assignment for the images. An alternative to ROI segmentation is to use ‘saliency’ to identify important regions, but saliency lacks semantic information and therefore includes false-positive regions. In this study, we propose an attention-driven visual emphasis method for volumetric medical image visualization. We developed a deep learning attention model, termed the focused-class attention map (F-CAM), that is trained with only image-wise labels for automated ROI localization and importance estimation. Our F-CAM transfers semantic information from the classification task to the localization of ROIs, with a focus on the small ROIs that characterize medical images. Additionally, we propose an attention compositing module that integrates the generated attention map with the transfer function within the DVR pipeline to automate the view-dependent visual emphasis of the ROIs. We demonstrate the superiority of our method over existing methods on a multi-modality PET-CT dataset and an MRI dataset.
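To make the attention compositing idea concrete, below is a minimal sketch of front-to-back ray compositing in which a per-sample attention weight modulates the opacity produced by the transfer function, suppressing low-attention occluders and emphasizing ROI samples. This is an illustrative assumption rather than the paper's implementation: the toy `transfer_function`, the linear modulation rule, and the `emphasis` parameter are all hypothetical.

```python
# Illustrative sketch of attention-modulated front-to-back ray compositing.
# NOT the paper's attention compositing module; the transfer function and
# modulation rule below are simplified assumptions for demonstration only.
import numpy as np

def transfer_function(intensity):
    """Toy 1-D transfer function: map a normalized intensity in [0, 1]
    to an RGBA sample (grayscale color, low intensity-proportional opacity)."""
    rgb = np.repeat(intensity, 3)
    alpha = intensity * 0.05
    return rgb, alpha

def composite_ray(intensities, attention, emphasis=4.0):
    """Front-to-back compositing of one viewing ray.

    intensities : (n,) normalized scalar samples along the ray
    attention   : (n,) attention weights in [0, 1] (1 = ROI, 0 = context)
    emphasis    : strength of the attention-driven opacity modulation

    Low-attention samples become more transparent so they no longer
    occlude the ROI; high-attention samples are visually emphasized.
    """
    color = np.zeros(3)
    trans = 1.0  # accumulated transmittance
    for s, a in zip(intensities, attention):
        rgb, alpha = transfer_function(s)
        # Attention-modulated opacity: attention below 0.5 suppresses the
        # sample, attention above 0.5 boosts it.
        alpha = np.clip(alpha * (1.0 + emphasis * (a - 0.5)), 0.0, 1.0)
        color += trans * alpha * rgb
        trans *= 1.0 - alpha
        if trans < 1e-3:  # early ray termination
            break
    return color

# Usage: a ray where low-attention occluders sit in front of an ROI.
rng = np.random.default_rng(0)
samples = rng.uniform(0.3, 0.9, size=128)
attn = np.zeros(128)
attn[64:96] = 1.0  # ROI in the back half of the ray
print(composite_ray(samples, attn))
```

Centering the modulation at an attention value of 0.5 lets a single parameter both suppress occluding context and boost the ROI; in a full pipeline, the attention map would be resampled per voxel along each viewing ray before compositing.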