
Latest publications in Computerized Medical Imaging and Graphics

Weakly supervised detection of pheochromocytomas and paragangliomas in CT using noisy data
IF 5.4, CAS Tier 2 (Medicine), Q1 ENGINEERING, BIOMEDICAL, Pub Date: 2024-07-20, DOI: 10.1016/j.compmedimag.2024.102419
David Oluigbo , Tejas Sudharshan Mathai , Bikash Santra , Pritam Mukherjee , Jianfei Liu , Abhishek Jha , Mayank Patel , Karel Pacak , Ronald M. Summers

Pheochromocytomas and Paragangliomas (PPGLs) are rare adrenal and extra-adrenal tumors with metastatic potential. Management of patients with PPGLs mainly depends on the makeup of their genetic cluster: SDHx, VHL/EPAS1, kinase, and sporadic. CT is the preferred modality for precise localization of PPGLs, so that their metastatic progression can be assessed. However, the variable size, morphology, and appearance of these tumors across anatomical regions can pose challenges for radiologists. Because radiologists must routinely track changes across patient visits, manually annotating PPGLs on every axial slice of a CT volume is time-consuming and cumbersome. As a result, PPGLs are only weakly annotated on axial slices by radiologists, in the form of RECIST measurements. To reduce this manual effort, we propose a method for the automated detection of PPGLs in CT via a proxy segmentation task. Weak 3D annotations (derived from 2D bounding boxes) were used to train both 2D and 3D nnUNet models to detect PPGLs via segmentation. We evaluated our approaches on an in-house dataset comprising chest-abdomen-pelvis CTs of 255 patients with confirmed PPGLs. On a test set of 53 CT volumes, our 3D nnUNet model achieved a detection precision of 70% and a sensitivity of 64.1%, outperforming the 2D model, which obtained a precision of 52.7% and a sensitivity of 27.5% (p < 0.05). The SDHx and sporadic genetic clusters achieved the highest precisions of 73.1% and 72.7%, respectively. Our state-of-the-art findings highlight the promising nature of the challenging task of automated PPGL detection.
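As an illustrative aside (not the authors' code): lesion-level detection precision and sensitivity of the kind reported above can be computed by matching each predicted lesion to a ground-truth lesion via volume overlap. The IoU matching threshold used here is an assumption.

```python
import numpy as np

def lesion_detection_metrics(pred_masks, gt_masks, iou_thresh=0.1):
    """Lesion-level precision and sensitivity.

    pred_masks / gt_masks: lists of boolean arrays, one per candidate lesion
    (e.g. connected components of a segmentation output). A predicted lesion
    is a true positive if it overlaps an unmatched ground-truth lesion with
    IoU above `iou_thresh` (threshold is an assumption, not from the paper).
    """
    matched_gt = set()
    tp = 0
    for p in pred_masks:
        for i, g in enumerate(gt_masks):
            if i in matched_gt:
                continue
            inter = np.logical_and(p, g).sum()
            union = np.logical_or(p, g).sum()
            if union and inter / union > iou_thresh:
                tp += 1
                matched_gt.add(i)
                break
    fp = len(pred_masks) - tp          # predictions with no matching lesion
    fn = len(gt_masks) - len(matched_gt)  # lesions the model missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    return precision, sensitivity
```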

Citations: 0
Multi-objective Bayesian optimization with enhanced features for adaptively improved glioblastoma partitioning and survival prediction
IF 5.4, CAS Tier 2 (Medicine), Q1 ENGINEERING, BIOMEDICAL, Pub Date: 2024-07-19, DOI: 10.1016/j.compmedimag.2024.102420
Yifan Li , Chao Li , Yiran Wei , Stephen Price , Carola-Bibiane Schönlieb , Xi Chen

Glioblastoma, an aggressive brain tumor prevalent in adults, exhibits heterogeneity in its microstructures and vascular patterns. The delineation of its subregions could facilitate the development of region-targeted therapies. However, current unsupervised learning techniques for this task face challenges in reliability due to fluctuations of clustering algorithms, particularly when processing data from diverse patient cohorts. Furthermore, stable clustering results do not guarantee clinical meaningfulness. To establish the clinical relevance of these subregions, we will perform survival predictions using radiomic features extracted from them. Following this, achieving a balance between outcome stability and clinical relevance presents a significant challenge, further exacerbated by the extensive time required for hyper-parameter tuning.

In this study, we introduce a multi-objective Bayesian optimization (MOBO) framework, which leverages a Feature-enhanced Auto-Encoder (FAE) and customized losses to assess both the reproducibility of clustering algorithms and the clinical relevance of their outcomes. Specifically, we embed the entirety of these processes within the MOBO framework, modeling both using distinct Gaussian Processes (GPs). The proposed MOBO framework can automatically balance the trade-off between the two criteria by employing bespoke stability and clinical significance losses. Our approach efficiently optimizes all hyper-parameters, including the FAE architecture and clustering parameters, within a few steps. This not only accelerates the process but also consistently yields robust MRI subregion delineations and provides survival predictions with strong statistical validation.
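To make the stability-versus-clinical-relevance trade-off concrete, here is a minimal, hypothetical sketch of the non-dominated (Pareto) filtering a multi-objective optimizer performs over candidate hyper-parameter settings; the actual framework models each loss with its own Gaussian Process inside a Bayesian optimization loop, which is not reproduced here.

```python
def pareto_front(points):
    """Indices of non-dominated candidates, minimizing both losses.

    Each point is a (stability_loss, clinical_loss) pair. A candidate is
    dominated if some other candidate is no worse on both losses and
    strictly better on at least one.
    """
    front = []
    for i, (a1, a2) in enumerate(points):
        dominated = any(
            (b1 <= a1 and b2 <= a2) and (b1 < a1 or b2 < a2)
            for j, (b1, b2) in enumerate(points) if j != i
        )
        if not dominated:
            front.append(i)
    return front
```

A MOBO loop would propose new hyper-parameter settings, evaluate both losses, and keep refining this front rather than optimizing a single scalarized objective.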

Citations: 0
2D/3D deformable registration for endoscopic camera images using self-supervised offline learning of intraoperative pneumothorax deformation
IF 5.4, CAS Tier 2 (Medicine), Q1 ENGINEERING, BIOMEDICAL, Pub Date: 2024-07-19, DOI: 10.1016/j.compmedimag.2024.102418
Tomoki Oya , Yuka Kadomatsu , Toyofumi Fengshi Chen-Yoshikawa , Megumi Nakao

Shape registration of patient-specific organ shapes to endoscopic camera images is expected to be key to realizing image-guided surgery, and a variety of machine learning approaches have been considered for it. Because the number of training data available from clinical cases is limited, the use of synthetic images generated from a statistical deformation model has been attempted; however, the influence on estimation caused by the difference between synthetic images and real scenes remains a problem. In this study, we propose a self-supervised offline learning framework for model-based registration using image features that can be obtained from both synthetic images and real camera images. Because of the limited number of endoscopic images available for training, we use synthetic images generated from a nonlinear deformation model that represents possible intraoperative pneumothorax deformations. To address the difficulty of estimating deformed shapes and viewpoints from the image features shared by synthetic and real images, we attempt to improve the registration error by adding shading and distance information, which can be obtained as prior knowledge in the synthetic images. Shape registration with real camera images is then performed by learning the task of predicting the differential model parameters between two synthetic images. The developed framework achieved registration accuracy with a mean absolute error of less than 10 mm and a mean distance of less than 5 mm in a thoracoscopic pulmonary cancer resection, confirming improved prediction accuracy compared with conventional methods.
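A minimal sketch of the self-supervised pair generation described above: two synthetic images are rendered from sampled deformation parameters, and their parameter difference becomes the regression target. `sample_params` and `render` are placeholder callables standing in for the statistical deformation model and renderer; they are not the authors' API.

```python
def make_training_pair(sample_params, render):
    """Build one self-supervised training example.

    sample_params: callable returning a deformation-parameter vector
    render:        callable mapping parameters to a synthetic image
    Both are hypothetical placeholders for the paper's deformation model.
    """
    p1 = sample_params()
    p2 = sample_params()
    img1, img2 = render(p1), render(p2)
    # The network sees the image pair and learns to predict p2 - p1.
    target = [b - a for a, b in zip(p1, p2)]
    return (img1, img2), target
```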

Citations: 0
Efficient multi-stage feedback attention for diverse lesion in cancer image segmentation
IF 5.4, CAS Tier 2 (Medicine), Q1 ENGINEERING, BIOMEDICAL, Pub Date: 2024-07-14, DOI: 10.1016/j.compmedimag.2024.102417
Dewa Made Sri Arsa , Talha Ilyas , Seok-Hwan Park , Leon Chua , Hyongsuk Kim

In the domain of Computer-Aided Diagnosis (CAD) systems, the accurate identification of cancer lesions is paramount, given the life-threatening nature of cancer and the complexities inherent in its manifestation. This task is particularly arduous due to the often vague boundaries of cancerous regions, compounded by the presence of noise and the heterogeneity in the appearance of lesions, making precise segmentation a critical yet challenging endeavor. This study introduces an innovative iterative feedback mechanism tailored for the nuanced detection of cancer lesions in a variety of medical imaging modalities, offering a refinement phase to adjust detection results. The core of our approach is the elimination of the need for an initial segmentation mask, a common limitation of iterative segmentation methods. Instead, we utilize a novel system in which the feedback for refining the segmentation is derived directly from the encoder–decoder architecture of our neural network model. This shift allows for more dynamic and accurate lesion identification. To further enhance the accuracy of our CAD system, we employ a multi-scale feedback attention mechanism to guide and refine the predicted mask over subsequent iterations. In parallel, we introduce a sophisticated weighted feedback loss function. This function synergistically combines global and iteration-specific loss terms, thereby refining parameter estimation and improving the overall precision of the segmentation. We conducted comprehensive experiments across three distinct categories of medical imaging: colonoscopy, ultrasonography, and dermoscopic images. The experimental results demonstrate that our method not only competes favorably with but also surpasses current state-of-the-art methods in various scenarios, including both standard and challenging out-of-domain tasks.
This evidences the robustness and versatility of our approach in accurately identifying cancer lesions across a spectrum of medical imaging contexts. Our source code can be found at https://github.com/dewamsa/EfficientFeedbackNetwork.
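As a hypothetical sketch of the weighted feedback loss idea, per-iteration refinement losses can be mixed with a global term taken from the final iteration. The weighting scheme below is an assumption, not the paper's exact formulation.

```python
def weighted_feedback_loss(iter_losses, iter_weights, global_weight=1.0):
    """Combine iteration-specific losses with a global loss term.

    iter_losses:  loss value from each refinement iteration
    iter_weights: per-iteration weights (an assumed scheme)
    The global term is taken as the final iteration's loss.
    """
    assert len(iter_losses) == len(iter_weights)
    iter_term = sum(w * l for w, l in zip(iter_weights, iter_losses))
    return global_weight * iter_losses[-1] + iter_term
```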

Citations: 0
ScribSD+: Scribble-supervised medical image segmentation based on simultaneous multi-scale knowledge distillation and class-wise contrastive regularization
IF 5.4, CAS Tier 2 (Medicine), Q1 ENGINEERING, BIOMEDICAL, Pub Date: 2024-07-09, DOI: 10.1016/j.compmedimag.2024.102416
Yijie Qu , Tao Lu , Shaoting Zhang , Guotai Wang

Although deep learning has achieved state-of-the-art performance for automatic medical image segmentation, it often requires a large amount of pixel-level manual annotation for training. Obtaining these high-quality annotations is time-consuming and requires specialized knowledge, which hinders the widespread adoption of methods that rely on such annotations to train a model with good segmentation performance. Using scribble annotations can substantially reduce the annotation cost, but often leads to poor segmentation performance due to insufficient supervision. In this work, we propose a novel framework named ScribSD+ that is based on multi-scale knowledge distillation and class-wise contrastive regularization for learning from scribble annotations. For a student network supervised by scribbles and a teacher based on an Exponential Moving Average (EMA), we first introduce multi-scale prediction-level Knowledge Distillation (KD) that leverages soft predictions of the teacher network to supervise the student at multiple scales, and then propose class-wise contrastive regularization, which encourages feature similarity within the same class and dissimilarity across different classes, thereby effectively improving the segmentation performance of the student network. Experimental results on the ACDC dataset for heart structure segmentation and a fetal MRI dataset for placenta and fetal brain segmentation demonstrate that our method significantly improves the student's performance and outperforms five state-of-the-art scribble-supervised learning methods.
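A minimal sketch of the EMA teacher update and a multi-scale distillation loss, using plain Python lists in place of network weights and predictions; the decay factor and the squared-error form are assumptions, not the paper's exact formulation.

```python
def ema_update(teacher, student, alpha=0.99):
    """EMA teacher: each weight is a decayed mix of its previous value and
    the current student weight (alpha=0.99 is an assumed decay)."""
    return [alpha * t + (1 - alpha) * s for t, s in zip(teacher, student)]

def kd_loss(student_preds, teacher_preds):
    """Multi-scale KD: mean squared difference between the student's and the
    EMA teacher's soft predictions, summed over scales."""
    total = 0.0
    for s, t in zip(student_preds, teacher_preds):
        total += sum((si - ti) ** 2 for si, ti in zip(s, t)) / len(s)
    return total
```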

Citations: 0
A comprehensive approach for evaluating lymphovascular invasion in invasive breast cancer: Leveraging multimodal MRI findings, radiomics, and deep learning analysis of intra- and peritumoral regions
IF 5.4, CAS Tier 2 (Medicine), Q1 ENGINEERING, BIOMEDICAL, Pub Date: 2024-07-08, DOI: 10.1016/j.compmedimag.2024.102415
Wen Liu , Li Li , Jiao Deng , Wei Li

Purpose

To evaluate lymphovascular invasion (LVI) in breast cancer by comparing the diagnostic performance of preoperative multimodal magnetic resonance imaging (MRI)-based radiomics and deep-learning (DL) models.

Methods

This retrospective study included 262 patients with breast cancer—183 in the training cohort (144 LVI-negative and 39 LVI-positive cases) and 79 in the validation cohort (59 LVI-negative and 20 LVI-positive cases). Radiomics features were extracted from the intra- and peritumoral breast regions using multimodal MRI to generate gross tumor volume (GTV)_radiomics and gross tumor volume plus peritumoral volume (GPTV)_radiomics. Subsequently, DL models (GTV_DL and GPTV_DL) were constructed based on the GTV and GPTV to determine the LVI status. Finally, the most effective radiomics and DL models were integrated with imaging findings to establish a hybrid model, which was converted into a nomogram to quantify the LVI risk.

Results

The diagnostic efficiency of GPTV_DL was superior to that of GTV_DL (areas under the curve [AUCs], 0.771 and 0.720, respectively). Similarly, GPTV_radiomics outperformed GTV_radiomics (AUCs, 0.685 and 0.636, respectively). Univariate and multivariate logistic regression analyses revealed that imaging findings, such as MRI-detected axillary lymph nodes and peritumoral edema, were associated with LVI status (AUC, 0.665). The hybrid model achieved the highest accuracy by combining GPTV_DL, GPTV_radiomics, and imaging findings (AUC, 0.872).

Conclusion

The diagnostic efficiency of the GPTV-derived radiomics and DL models surpassed that of the GTV-derived models. Furthermore, the hybrid model, which incorporated GPTV_DL, GPTV_radiomics, and imaging findings, demonstrated the effective determination of LVI status prior to surgery in patients with breast cancer.
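For reference, the AUCs reported above can be estimated with the rank-based (Mann-Whitney) formulation: the probability that a randomly chosen positive case scores higher than a randomly chosen negative case, with ties counted as one half. This is a generic sketch, not the study's code.

```python
def auc(scores, labels):
    """Empirical AUC via the Mann-Whitney rank formulation.

    scores: model outputs (higher = more likely positive)
    labels: 1 for positive (e.g. LVI-positive), 0 for negative
    """
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    # Count positive-vs-negative pairs the model ranks correctly; ties = 0.5.
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```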

Accurate segmentation of liver tumor from multi-modality non-contrast images using a dual-stream multi-level fusion framework 利用双流多层次融合框架从多模态非对比图像中准确分割肝脏肿瘤。
IF 5.4 2区 医学 Q1 ENGINEERING, BIOMEDICAL Pub Date : 2024-07-03 DOI: 10.1016/j.compmedimag.2024.102414
Chenchu Xu , Xue Wu , Boyan Wang , Jie Chen , Zhifan Gao , Xiujian Liu , Heye Zhang

The use of multi-modality non-contrast images (i.e., T1FS, T2FS, and DWI) for segmenting liver tumors eliminates the need for contrast agents and is crucial for clinical diagnosis. However, discovering the most useful information with which to fuse multi-modality images for accurate segmentation remains challenging due to inter-modal interference. In this paper, we propose a dual-stream multi-level fusion framework (DM-FF) that, for the first time, accurately segments liver tumors directly from non-contrast multi-modality images. DM-FF first employs an attention-based encoder–decoder to effectively extract multi-level feature maps corresponding to a specified representation of each modality. It then creates two types of fusion modules: one fuses learned features into a shared representation across multi-modality images to exploit commonalities and improve performance, and the other fuses segmentation decision evidence to discover differences between modalities and prevent interference caused by modality conflict. By integrating these three components, DM-FF enables multi-modality non-contrast images to cooperate with each other and achieves accurate segmentation. In an evaluation on 250 patients with different tumor types, scanned on two MRI scanners, DM-FF achieves a Dice of 81.20% and improves performance (Dice by at least 11%) over eight state-of-the-art segmentation architectures. The results indicate that DM-FF can significantly promote the development and deployment of non-contrast liver tumor segmentation technology.
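The Dice score used to evaluate DM-FF is the standard overlap measure for segmentation masks; a minimal sketch on toy 3D masks (not the authors' pipeline):

```python
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8) -> float:
    # Dice = 2|A ∩ B| / (|A| + |B|) on boolean masks
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + eps)

# Two overlapping toy cubes in a 16^3 volume
pred = np.zeros((16, 16, 16), dtype=bool)
pred[2:10, 2:10, 2:10] = True
gt = np.zeros((16, 16, 16), dtype=bool)
gt[4:12, 4:12, 4:12] = True
print(round(dice(pred, gt), 4))  # → 0.4219
```

A reported Dice of 81.20% corresponds to this quantity averaged over the test cases.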

Radiomic-based prediction of lesion-specific systemic treatment response in metastatic disease 基于放射线组学预测转移性疾病的病灶特异性全身治疗反应。
IF 5.4 2区 医学 Q1 ENGINEERING, BIOMEDICAL Pub Date : 2024-06-25 DOI: 10.1016/j.compmedimag.2024.102413
Caryn Geady , Farnoosh Abbas-Aghababazadeh , Andres Kohan , Scott Schuetze , David Shultz , Benjamin Haibe-Kains

Despite sharing the same histologic classification, individual tumors in multi-metastatic patients may present with different characteristics and varying sensitivities to anticancer therapies. In this study, we investigate the utility of radiomic biomarkers for predicting lesion-specific treatment resistance in multi-metastatic leiomyosarcoma patients. Using a dataset of n=202 lung metastases (LM) from n=80 patients, with 1648 pre-treatment computed tomography (CT) radiomics features and LM progression determined from follow-up CT, we developed a radiomic model to predict the progression of each lesion. Repeat experiments assessed the relative predictive performance across LM volume groups. Lesion-specific radiomic models indicate up to a 4.5-fold increase in predictive capacity compared with a no-skill classifier, with an area under the precision-recall curve of 0.70 for the most precise model (FDR = 0.05). Precision varied by administered drug and LM volume. The effect of LM volume was controlled by removing radiomic features at a volume-correlation coefficient threshold of 0.20. Predicting lesion-specific responses using radiomic features represents a novel strategy for assessing treatment response that acknowledges biological diversity within metastatic subclones, and it could facilitate management strategies involving selective ablation of resistant clones in the setting of systemic therapy.
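The volume-confounder control described — dropping any radiomic feature whose correlation with lesion volume exceeds 0.20 — can be sketched as follows. The data and the `volume_filter` helper are synthetic and illustrative, not the study's code.

```python
import numpy as np

rng = np.random.default_rng(1)
n_lesions, n_features = 100, 6
volume = rng.lognormal(mean=3.0, sigma=0.8, size=n_lesions)   # lesion volumes
features = rng.normal(size=(n_lesions, n_features))           # synthetic radiomic features
features[:, 0] += 0.05 * volume                               # make feature 0 track volume

def volume_filter(feats, vol, thresh=0.20):
    # Keep only features whose |Pearson r| with volume is at or below the threshold
    keep = []
    for j in range(feats.shape[1]):
        r = np.corrcoef(feats[:, j], vol)[0, 1]
        if abs(r) <= thresh:
            keep.append(j)
    return keep

kept = volume_filter(features, volume)
print("features kept after volume filtering:", kept)
```

This kind of filter keeps the downstream model from rediscovering lesion size instead of genuine texture signal.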

Fragment distance-guided dual-stream learning for automatic pelvic fracture segmentation 用于骨盆骨折自动分割的片段距离引导双流学习。
IF 5.4 2区 医学 Q1 ENGINEERING, BIOMEDICAL Pub Date : 2024-06-19 DOI: 10.1016/j.compmedimag.2024.102412
Bolun Zeng , Huixiang Wang , Leo Joskowicz , Xiaojun Chen

Pelvic fracture is a complex and severe injury. Accurate diagnosis and treatment planning require the segmentation of the pelvic structure and the fractured fragments from preoperative CT scans. However, this segmentation is a challenging task, as the fragments from a pelvic fracture typically exhibit considerable variability and irregularity in morphology, location, and quantity. In this study, we propose a novel dual-stream learning framework for the automatic segmentation and category labeling of pelvic fractures. Our method uniquely identifies pelvic fracture fragments in various quantities and locations using a dual-branch architecture that leverages distance learning from bone fragments. Moreover, we develop a multi-size feature fusion module that adaptively aggregates features from diverse receptive fields tailored to targets of different sizes and shapes, thus boosting segmentation performance. Extensive experiments on three pelvic fracture datasets from different medical centers demonstrated the accuracy and generalizability of the proposed method. It achieves a mean Dice coefficient and mean sensitivity of 0.935±0.068 and 0.929±0.058 on the FracCLINIC dataset, and 0.955±0.072 and 0.912±0.125 on the FracSegData dataset, which are superior to those of other comparison methods. Our method optimizes the process of pelvic fracture segmentation, potentially serving as an effective tool for preoperative planning in the clinical management of pelvic fractures.
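The "distance learning from bone fragments" idea can be illustrated with a per-voxel map of Euclidean distance to the nearest fragment voxel, the kind of guidance signal a network branch could consume. This brute-force toy version is an assumption about the general technique, not the paper's implementation; a real pipeline would use a fast Euclidean distance transform.

```python
import numpy as np

def fragment_distance_map(mask: np.ndarray) -> np.ndarray:
    # Per-voxel Euclidean distance to the nearest fragment voxel (brute force)
    coords = np.argwhere(mask).astype(float)
    grid = np.indices(mask.shape).reshape(mask.ndim, -1).T.astype(float)
    d = np.sqrt(((grid[:, None, :] - coords[None, :, :]) ** 2).sum(-1)).min(axis=1)
    return d.reshape(mask.shape)

mask = np.zeros((8, 8, 8), dtype=bool)
mask[1, 1, 1] = True          # toy "fragment" 1
mask[6, 6, 6] = True          # toy "fragment" 2
dmap = fragment_distance_map(mask)
print(dmap[1, 1, 1], round(float(dmap[4, 4, 4]), 3))
```

The pairwise-distance tensor here is O(voxels × fragment voxels), so this sketch only scales to toy grids.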

Precision dose prediction for breast cancer patients undergoing IMRT: The Swin-UMamba-Channel Model 对接受 IMRT 的乳腺癌患者进行精确剂量预测:Swin-Umamba-Channel 模型
IF 5.7 2区 医学 Q1 ENGINEERING, BIOMEDICAL Pub Date : 2024-06-13 DOI: 10.1016/j.compmedimag.2024.102409
Hui Xie , Hua Zhang , Zijie Chen , Tao Tan
Background: Radiation therapy is one of the crucial treatment modalities for cancer. An excellent radiation therapy plan relies heavily on an outstanding dose distribution map, which is traditionally generated through repeated trials and adjustments by experienced physicists. However, this process is both time-consuming and labor-intensive, and it carries a degree of subjectivity. With the powerful capabilities of deep learning, we are now able to predict dose distribution maps more accurately, effectively overcoming these challenges. Methods: In this study, we propose a novel Swin-UMamba-Channel prediction model specifically designed to predict the dose distribution for patients with left breast cancer undergoing radiotherapy after total mastectomy. This model integrates anatomical position information of organs and ray angle information, significantly enhancing prediction accuracy. Through iterative training of the generator (Swin-UMamba) and discriminator, the model can generate images that closely match the actual dose, assisting physicists in quickly creating DVH curves and shortening the treatment planning cycle. Our model exhibits excellent performance in terms of prediction accuracy, computational efficiency, and practicality, and its effectiveness has been further verified through comparative experiments with similar networks. Results: The results indicate that our model can accurately predict the clinical dose for breast cancer patients undergoing intensity-modulated radiation therapy (IMRT). The predicted dose range is 0 to 50 Gy, and compared with actual data it shows high accuracy, with an average Dice similarity coefficient of 0.86. Specifically, the average dose change rate for the planning target volume ranges from 0.28 % to 1.515 %, while the average dose change rates for the right and left lungs are 2.113 % and 0.508 %, respectively. Notably, due to their small sizes, the heart and spinal cord exhibit relatively higher average dose change rates, reaching 3.208 % and 1.490 %, respectively. In comparison with similar dose studies, our model demonstrates superior performance. Additionally, our model possesses fewer parameters, lower computational complexity, and shorter processing time, further enhancing its practicality and efficiency. These findings provide strong evidence for the accuracy and reliability of our model in predicting doses, offering significant technical support for IMRT in breast cancer patients. Conclusion: This study presents a novel Swin-UMamba-Channel dose prediction model, and its results demonstrate precise prediction of clinical doses for the target area of left breast cancer patients undergoing total mastectomy and IMRT. These remarkable achievements provide valuable reference data for subsequent plan optimization and quality control, paving a new path for the application of deep learning in the field of radiation therapy.
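The DVH curves mentioned in the abstract are cumulative dose-volume histograms: for each dose level, the fraction of a structure receiving at least that dose. A minimal sketch on a synthetic dose grid and structure mask (not the authors' planning system):

```python
import numpy as np

def cumulative_dvh(dose, mask, bins):
    # Fraction of the structure's voxels receiving at least each bin dose
    d = dose[mask]
    return np.array([(d >= b).mean() for b in bins])

rng = np.random.default_rng(2)
dose = rng.uniform(0, 50, size=(32, 32, 32))   # toy dose grid (Gy), matching the 0-50 Gy range
mask = np.zeros(dose.shape, dtype=bool)
mask[8:24, 8:24, 8:24] = True                  # toy planning target volume
bins = np.linspace(0, 50, 6)                   # 0, 10, ..., 50 Gy
dvh = cumulative_dvh(dose, mask, bins)
print(np.round(dvh, 2))
```

A predicted dose grid from the model would be evaluated the same way, with one curve per organ at risk and target.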