
Latest publications in Computerized Medical Imaging and Graphics

Learning from crowds for automated histopathological image segmentation
IF 5.7 | CAS Tier 2 (Medicine) | Q1 Medicine | Pub Date: 2024-01-05 | DOI: 10.1016/j.compmedimag.2024.102327
Miguel López-Pérez , Pablo Morales-Álvarez , Lee A.D. Cooper , Christopher Felicelli , Jeffery Goldstein , Brian Vadasz , Rafael Molina , Aggelos K. Katsaggelos

Automated semantic segmentation of histopathological images is an essential task in Computational Pathology (CPATH). The main limitation of Deep Learning (DL) in addressing this task is the scarcity of expert annotations. Crowdsourcing (CR) has emerged as a promising solution to reduce the individual (expert) annotation cost by distributing the labeling effort among a group of (non-expert) annotators. Extracting knowledge in this scenario is challenging, as it involves noisy annotations. Jointly learning the underlying (expert) segmentation and the annotators’ expertise is currently a commonly used approach. Unfortunately, this approach is frequently carried out by learning a different neural network for each annotator, which scales poorly as the number of annotators grows. For this reason, this strategy cannot be easily applied to real-world CPATH segmentation. This paper proposes a new family of methods for CR segmentation of histopathological images. Our approach consists of two coupled networks: a segmentation network (for learning the expert segmentation) and an annotator network (for learning the annotators’ expertise). We propose to estimate the annotators’ behavior with a single network that receives the annotator ID as input, achieving scalability in the number of annotators. Our family comprises three different models for the annotator network. Within this family, we propose an annotator-network model that is novel in the CR segmentation literature in that it considers the global features of the image. We validate our methods on a real-world dataset of Triple Negative Breast Cancer images labeled by several medical students. Our new CR model achieves a Dice coefficient of 0.7827, outperforming the well-known STAPLE (0.7039) and remaining competitive with the supervised method trained on expert labels (0.7723).
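The Dice coefficients quoted in this abstract (0.7827 vs. 0.7039 for STAPLE) are overlap scores between a predicted and a reference mask. A minimal sketch of the standard Dice computation on binary masks (generic, not the authors' code):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity 2*|A∩B| / (|A| + |B|) between two binary masks.

    `eps` avoids division by zero when both masks are empty.
    """
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
```

Identical masks score 1.0 and disjoint masks score (near) 0, so a Dice of 0.78 indicates substantial but imperfect overlap with the expert segmentation.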

Citations: 0
Time multiscale regularization for nonlinear image registration
IF 5.7 | CAS Tier 2 (Medicine) | Q1 Medicine | Pub Date: 2024-01-05 | DOI: 10.1016/j.compmedimag.2024.102331
Lili Bao , Ke Chen , Dexing Kong , Shihui Ying , Tieyong Zeng

Regularization-based methods are commonly used for image registration. However, fixed regularizers have limitations in capturing details and describing the dynamic registration process. To address this issue, we propose a time multiscale framework for nonlinear image registration in this paper. Our approach replaces the fixed regularizer with a monotone decreasing sequence and iteratively uses the residual of the previous step as the input for registration. In particular, we first introduce a dynamically varying regularization strategy that updates the regularizer at each iteration and incorporates it into a multiscale framework. This approach guarantees an overall smooth deformation field in the initial stage of registration and fine-tunes local details as the images become more similar. We then establish a convergence analysis under certain conditions on the regularizers and parameters. Further, we introduce a TV-like regularizer to demonstrate the efficiency of our method. Finally, we compare our proposed multiscale algorithm with some existing methods on both synthetic images and pulmonary computed tomography (CT) images. The experimental results validate that our proposed algorithm outperforms the compared methods, especially in preserving details when registering images with sharp structures.
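As a rough illustration of two ingredients mentioned in this abstract, the sketch below computes an isotropic TV-style penalty on a 2-D field and builds a monotone decreasing regularizer sequence. The geometric-decay schedule and all names are illustrative assumptions, not the paper's actual formulation:

```python
import numpy as np

def tv_penalty(field):
    """Isotropic TV-style penalty of a 2-D scalar field:
    sum of sqrt(dx^2 + dy^2) over forward differences
    (both difference arrays are trimmed to a common shape)."""
    dx = np.diff(field, axis=1)[:-1, :]
    dy = np.diff(field, axis=0)[:, :-1]
    return float(np.sqrt(dx**2 + dy**2).sum())

def regularizer_schedule(lam0=1.0, decay=0.5, n_steps=5):
    """Monotone decreasing regularization weights lam0 * decay^k,
    stronger smoothing early on, finer detail later (decay is hypothetical)."""
    return [lam0 * decay**k for k in range(n_steps)]
```

A constant field incurs zero penalty, which is why a TV-like term favors piecewise-smooth deformation fields while still tolerating sharp transitions.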

Citations: 0
MicroSegNet: A deep learning approach for prostate segmentation on micro-ultrasound images
IF 5.7 | CAS Tier 2 (Medicine) | Q1 Medicine | Pub Date: 2024-01-05 | DOI: 10.1016/j.compmedimag.2024.102326
Hongxu Jiang , Muhammad Imran , Preethika Muralidharan , Anjali Patel , Jake Pensa , Muxuan Liang , Tarik Benidir , Joseph R. Grajo , Jason P. Joseph , Russell Terry , John Michael DiBianco , Li-Ming Su , Yuyin Zhou , Wayne G. Brisbane , Wei Shao

Micro-ultrasound (micro-US) is a novel 29-MHz ultrasound technique that provides 3-4 times higher resolution than traditional ultrasound, potentially enabling low-cost, accurate diagnosis of prostate cancer. Accurate prostate segmentation is crucial for prostate volume measurement, cancer diagnosis, prostate biopsy, and treatment planning. However, prostate segmentation on micro-US is challenging due to artifacts and indistinct borders between the prostate, bladder, and urethra in the midline. This paper presents MicroSegNet, a multi-scale annotation-guided transformer UNet model designed specifically to tackle these challenges. During the training process, MicroSegNet focuses more on regions that are hard to segment (hard regions), characterized by discrepancies between expert and non-expert annotations. We achieve this by proposing an annotation-guided binary cross entropy (AG-BCE) loss that assigns a larger weight to prediction errors in hard regions and a lower weight to prediction errors in easy regions. The AG-BCE loss was seamlessly integrated into the training process through the utilization of multi-scale deep supervision, enabling MicroSegNet to capture global contextual dependencies and local information at various scales. We trained our model using micro-US images from 55 patients, followed by evaluation on 20 patients. Our MicroSegNet model achieved a Dice coefficient of 0.939 and a Hausdorff distance of 2.02 mm, outperforming several state-of-the-art segmentation methods, as well as three human annotators with different experience levels. Our code is publicly available at https://github.com/mirthAI/MicroSegNet and our dataset is publicly available at https://zenodo.org/records/10475293.
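The AG-BCE loss described in this abstract is, in essence, a binary cross entropy whose per-pixel weight is larger inside hard regions (where expert and non-expert annotations disagree). A minimal NumPy sketch under that reading; the weight values `w_hard` and `w_easy` are hypothetical, not the paper's settings:

```python
import numpy as np

def ag_bce(pred, target, hard_mask, w_hard=2.0, w_easy=1.0, eps=1e-7):
    """Annotation-guided BCE: per-pixel BCE scaled by a larger weight
    inside 'hard' regions (expert/non-expert disagreement) and a
    smaller weight elsewhere. Weight values are illustrative."""
    pred = np.clip(np.asarray(pred, dtype=float), eps, 1 - eps)
    target = np.asarray(target, dtype=float)
    bce = -(target * np.log(pred) + (1 - target) * np.log(1 - pred))
    weights = np.where(np.asarray(hard_mask, dtype=bool), w_hard, w_easy)
    return float((weights * bce).mean())
```

With these defaults, an identical prediction error costs exactly twice as much inside a hard region as outside it, steering training effort toward the ambiguous prostate borders.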

Citations: 0
One model, two brains: Automatic fetal brain extraction from MR images of twins
IF 5.7 | CAS Tier 2 (Medicine) | Q1 Medicine | Pub Date: 2024-01-05 | DOI: 10.1016/j.compmedimag.2024.102330
Jian Chen , Ranlin Lu , Bin Jing , He Zhang , Geng Chen , Dinggang Shen

Fetal brain extraction from magnetic resonance (MR) images is of great importance for both clinical applications and neuroscience studies. However, it is a challenging task, especially when dealing with twins, which commonly occur in pregnancy. Currently, there is no brain extraction method dedicated to twins, creating a significant demand for an effective twin fetal brain extraction method. To this end, we propose the first twin fetal brain extraction framework, which possesses three novel features. First, to narrow down the region of interest and preserve structural information between the two brains in twin fetal MR images, we take advantage of an advanced object detector to locate all the brains in twin fetal MR images at once. Second, we propose a Twin Fetal Brain Extraction Network (TFBE-Net) to further suppress insignificant features when segmenting brain regions. Finally, we propose a Two-step Training Strategy (TTS) to learn correlation features of the single fetal brain, further improving the performance of TFBE-Net. We validate the proposed framework on a twin fetal brain dataset. The experiments show that our framework achieves promising performance in both quantitative and qualitative evaluations, and outperforms state-of-the-art methods for fetal brain extraction.
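The detect-then-segment idea in this abstract can be sketched as a generic crop, segment, and paste loop. Everything below is an assumption for illustration: `segment_crop` stands in for the per-crop segmentation stage, and the `(y0, y1, x0, x1)` box format is a convention chosen here, not the paper's interface:

```python
import numpy as np

def segment_with_detections(image, boxes, segment_crop):
    """Decoupled pipeline: crop each detected bounding box, segment the
    crop with `segment_crop` (a stand-in callable), and paste the result
    into a full-size label map, labeling the brains 1, 2, ..."""
    mask = np.zeros(image.shape[:2], dtype=np.uint8)
    for label, (y0, y1, x0, x1) in enumerate(boxes, start=1):
        crop_mask = segment_crop(image[y0:y1, x0:x1])
        mask[y0:y1, x0:x1][crop_mask > 0] = label
    return mask
```

Restricting segmentation to detected boxes narrows the region of interest while the distinct labels keep the two brains separated in the output.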

Citations: 0
Uninformed Teacher-Student for hard-samples distillation in weakly supervised mitosis localization
IF 5.7 | CAS Tier 2 (Medicine) | Q1 Medicine | Pub Date: 2024-01-04 | DOI: 10.1016/j.compmedimag.2024.102328
Claudio Fernandez-Martín , Julio Silva-Rodriguez , Umay Kiraz , Sandra Morales , Emiel A.M. Janssen , Valery Naranjo

Background and Objective:

Mitotic activity is a crucial biomarker for diagnosing and predicting outcomes for different types of cancers, particularly breast cancer. However, manual mitosis counting is challenging and time-consuming for pathologists, with moderate reproducibility due to biopsy slide size, low mitotic cell density, and pattern heterogeneity. In recent years, deep learning methods based on convolutional neural networks (CNNs) have been proposed to address these limitations. Nonetheless, these methods have been hampered by the available data labels, which usually consist only of the centroids of mitosis, and by the incoming noise from annotated hard negatives. As a result, complex algorithms with multiple stages are often required to refine the labels at the pixel level and reduce the number of false positives.

Methods:

This article presents a novel weakly supervised approach for mitosis detection that utilizes only image-level labels on histological hematoxylin and eosin (H&E) images, avoiding the need for complex labeling scenarios. Also, an Uninformed Teacher-Student (UTS) pipeline is introduced to detect and distill hard samples by comparing weakly supervised localizations with the annotated centroids, using strong augmentations to enhance uncertainty. Additionally, an automatic proliferation score is proposed that mimics the pathologist-annotated mitotic activity index (MAI). The proposed approach is evaluated on three publicly available datasets for mitosis detection on breast histology samples, and two datasets for mitotic activity counting in whole-slide images.
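The hard-sample detection step, comparing weakly supervised localizations against annotated centroids, can be sketched as a simple distance test. The threshold value and function names are illustrative, not the paper's actual criterion:

```python
import numpy as np

def flag_hard_samples(pred_centroids, gt_centroids, max_dist=20.0):
    """Mark a sample 'hard' when the weakly supervised localization falls
    farther than `max_dist` pixels from the annotated mitosis centroid
    (the threshold is a hypothetical value for illustration)."""
    flags = []
    for p, g in zip(pred_centroids, gt_centroids):
        d = float(np.linalg.norm(np.asarray(p, dtype=float) -
                                 np.asarray(g, dtype=float)))
        flags.append(d > max_dist)
    return flags
```

Samples flagged this way are the uncertain cases that the teacher-student distillation then focuses on.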

Results:

The proposed framework achieves competitive performance with relevant prior literature in all the datasets used for evaluation without explicitly using the mitosis location information during training. This approach challenges previous methods that rely on strong mitosis location information and multiple stages to refine false positives. Furthermore, the proposed pipeline for hard-sample distillation demonstrates promising dataset-specific improvements. Concretely, when the annotation has not been thoroughly refined by multiple pathologists, the UTS model offers improvements of up to 4% in mitosis localization, thanks to the detection and distillation of uncertain cases. Concerning the mitosis counting task, the proposed automatic proliferation score shows a moderate positive correlation with the MAI annotated by pathologists at the biopsy level on two external datasets.

Conclusions:

The proposed Uninformed Teacher-Student pipeline leverages strong augmentations to distill uncertain samples and measure dissimilarities between predicted and annotated mitoses. Results demonstrate the feasibility of the weakly supervised approach and highlight its potential as an objective evaluation tool for tumor proliferation.

Citations: 0
SMILE: Siamese Multi-scale Interactive-representation LEarning for Hierarchical Diffeomorphic Deformable image registration
IF 5.7 | CAS Tier 2 (Medicine) | Q1 Medicine | Pub Date: 2024-01-01 | DOI: 10.1016/j.compmedimag.2023.102322
Xiaoru Gao, Guoyan Zheng

Deformable medical image registration plays an important role in many clinical applications. It aims to find a dense deformation field that establishes point-wise correspondences between a pair of fixed and moving images. Recently, unsupervised deep learning-based registration methods have drawn more and more attention because of their fast inference at the testing stage. Despite remarkable progress, existing deep learning-based methods suffer from several limitations: (a) they often overlook the explicit modeling of feature correspondences due to limited receptive fields; (b) their performance on image pairs with large spatial displacements is still limited, since the dense deformation field is regressed from features learned by local convolutions; and (c) desirable properties, including topology preservation and the invertibility of the transformation, are often ignored. To address the above limitations, we propose a novel Convolutional Neural Network (CNN) consisting of a Siamese Multi-scale Interactive-representation LEarning (SMILE) encoder and a Hierarchical Diffeomorphic Deformation (HDD) decoder. Specifically, the SMILE encoder aims at effective feature representation learning and the establishment of spatial correspondences, while the HDD decoder regresses the dense deformation field in a coarse-to-fine manner. We additionally propose a novel Local Invertible Loss (LIL) to encourage topology preservation and local invertibility of the regressed transformation while keeping high registration accuracy. Extensive experiments conducted on two publicly available brain image datasets demonstrate the superiority of our method over state-of-the-art (SOTA) approaches. Specifically, on the Neurite-OASIS dataset, our method achieved an average DSC of 0.815 and an average ASSD of 0.633 mm.
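Topology preservation and invertibility of a deformation are commonly assessed through the Jacobian determinant of φ(x) = x + u(x) being positive everywhere. A minimal finite-difference sketch for a 2-D displacement field follows; this is the generic check, not the paper's Local Invertible Loss:

```python
import numpy as np

def jacobian_determinant(disp):
    """det of the Jacobian of phi(x) = x + u(x) for a 2-D displacement
    field `disp` of shape (H, W, 2), via np.gradient finite differences
    (np.gradient returns derivatives along axis 0 then axis 1)."""
    u, v = disp[..., 0], disp[..., 1]
    du_dy, du_dx = np.gradient(u)
    dv_dy, dv_dx = np.gradient(v)
    return (1.0 + du_dx) * (1.0 + dv_dy) - du_dy * dv_dx

def folding_ratio(disp):
    """Fraction of pixels where the transform folds (det J <= 0);
    0.0 means the field is locally invertible everywhere."""
    return float((jacobian_determinant(disp) <= 0).mean())
```

The identity transform (zero displacement) has det J = 1 everywhere and therefore no foldings; a loss penalizing non-positive determinants pushes the regressed field toward that diffeomorphic regime.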

SMILE: Siamese Multi-scale Interactive-representation LEarning for Hierarchical Diffeomorphic Deformable image registration
IF 5.7 2区 医学 Q1 Medicine Pub Date : 2024-01-01 DOI: 10.1016/j.compmedimag.2023.102322
Xiaoru Gao, Guoyan Zheng

Deformable medical image registration plays an important role in many clinical applications. It aims to find a dense deformation field to establish point-wise correspondences between a pair of fixed and moving images. Recently, unsupervised deep learning-based registration methods have drawn more and more attention because of their fast inference at the testing stage. Despite remarkable progress, existing deep learning-based methods suffer from several limitations, including: (a) they often overlook the explicit modeling of feature correspondences due to limited receptive fields; (b) the performance on image pairs with large spatial displacements is still limited, since the dense deformation field is regressed from features learned by local convolutions; and (c) desirable properties, including topology preservation and the invertibility of the transformation, are often ignored. To address the above limitations, we propose a novel Convolutional Neural Network (CNN) consisting of a Siamese Multi-scale Interactive-representation LEarning (SMILE) encoder and a Hierarchical Diffeomorphic Deformation (HDD) decoder. Specifically, the SMILE encoder aims for effective feature representation learning and spatial correspondence establishment, while the HDD decoder seeks to regress the dense deformation field in a coarse-to-fine manner. We additionally propose a novel Local Invertible Loss (LIL) to encourage topology preservation and local invertibility of the regressed transformation while keeping high registration accuracy. Extensive experiments conducted on two publicly available brain image datasets demonstrate the superiority of our method over state-of-the-art (SOTA) approaches. Specifically, on the Neurite-OASIS dataset, our method achieved an average DSC of 0.815 and an average ASSD of 0.633 mm.
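The Local Invertible Loss above penalizes local folding of the regressed transformation. As a rough, framework-agnostic sketch (not the paper's implementation), local invertibility of a displacement field u can be checked through the sign of the Jacobian determinant of φ(x) = x + u(x); the function names below are illustrative only.

```python
import numpy as np

def jacobian_det_2d(disp):
    """Jacobian determinant of phi(x) = x + disp at each pixel.

    disp: (H, W, 2) displacement field; derivatives via finite
    differences (np.gradient returns per-axis derivatives, row-axis first).
    """
    dy_y, dy_x = np.gradient(disp[..., 0])  # d(disp_y)/dy, d(disp_y)/dx
    dx_y, dx_x = np.gradient(disp[..., 1])  # d(disp_x)/dy, d(disp_x)/dx
    # Jacobian of the full transform: identity plus displacement gradients.
    return (1.0 + dy_y) * (1.0 + dx_x) - dy_x * dx_y

def folding_fraction(disp):
    """Fraction of pixels where the transform is locally non-invertible."""
    return float(np.mean(jacobian_det_2d(disp) <= 0))

# The identity transform folds nowhere, so the fraction is zero.
identity = np.zeros((16, 16, 2))
assert folding_fraction(identity) == 0.0
```

A loss in this spirit would penalize non-positive determinants; the paper's LIL additionally balances this against registration accuracy.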
Citations: 0
Rethinking automatic segmentation of gross target volume from a decoupling perspective
IF 5.7 2区 医学 Q1 Medicine Pub Date : 2023-12-29 DOI: 10.1016/j.compmedimag.2023.102323
Jun Shi, Zhaohui Wang, Shulan Ruan, Minfan Zhao, Ziqi Zhu, Hongyu Kan, Hong An, Xudong Xue, Bing Yan

Accurate and reliable segmentation of Gross Target Volume (GTV) is critical in cancer Radiation Therapy (RT) planning, but manual delineation is time-consuming and subject to inter-observer variations. Recently, deep learning methods have achieved remarkable success in medical image segmentation. However, due to the low image contrast and extreme pixel imbalance between GTV and adjacent tissues, most existing methods achieve only limited performance on automatic GTV segmentation. In this paper, we propose a Heterogeneous Cascade Framework (HCF) from a decoupling perspective, which decomposes GTV segmentation into independent recognition and segmentation subtasks. The former aims to screen out the abnormal slices containing GTV, while the latter performs pixel-wise segmentation of these slices. With the decoupled two-stage framework, we can efficiently filter normal slices to reduce false positives. To further improve the segmentation performance, we design a multi-level Spatial Alignment Network (SANet) based on the feature pyramid structure, which introduces a spatial alignment module into the decoder to compensate for the information loss caused by downsampling. Moreover, we propose a Combined Regularization (CR) loss and Balance-Sampling Strategy (BSS) to alleviate the pixel imbalance problem and improve network convergence. Extensive experiments on two public datasets of the StructSeg2019 challenge demonstrate that our method outperforms state-of-the-art methods, with especially significant advantages in reducing false positives and accurately segmenting small objects. The code is available at https://github.com/shijun18/GTV_AutoSeg.
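The decoupled two-stage idea (screen slices first, then segment only the flagged ones) can be sketched in a few lines; `recognize` and `segment` below are toy stand-ins for the recognition and segmentation networks, not the actual HCF models.

```python
import numpy as np

def cascade_segment(volume, recognize, segment, threshold=0.5):
    """Decoupled two-stage segmentation of a 3D volume (D, H, W).

    recognize(slice) -> probability that the slice contains target tissue;
    segment(slice)   -> binary mask for one slice.
    The segmenter runs only on flagged slices, so normal slices are
    filtered out early and cannot contribute false positives.
    """
    masks = np.zeros_like(volume, dtype=np.uint8)
    for i, sl in enumerate(volume):
        if recognize(sl) >= threshold:
            masks[i] = segment(sl)
    return masks

# Toy stand-ins: an "abnormal" slice is one with a bright region.
recognize = lambda sl: float(sl.max() > 0.8)
segment = lambda sl: (sl > 0.8).astype(np.uint8)

vol = np.zeros((4, 8, 8))
vol[2, 3:5, 3:5] = 1.0          # one abnormal slice
out = cascade_segment(vol, recognize, segment)
assert out.sum() == 4 and out[0].sum() == 0
```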

Citations: 0
Are you sure it’s an artifact? Artifact detection and uncertainty quantification in histological images
IF 5.7 2区 医学 Q1 Medicine Pub Date : 2023-12-20 DOI: 10.1016/j.compmedimag.2023.102321
Neel Kanwal, Miguel López-Pérez, Umay Kiraz, Tahlita C.M. Zuiverloon, Rafael Molina, Kjersti Engan

Modern cancer diagnostics involves extracting tissue specimens from suspicious areas and conducting histotechnical procedures to prepare a digitized glass slide, called a Whole Slide Image (WSI), for further examination. These procedures frequently introduce different types of artifacts in the obtained WSI, and histological artifacts might influence Computational Pathology (CPATH) systems further down the diagnostic pipeline if not excluded or handled. Deep Convolutional Neural Networks (DCNNs) have achieved promising results for the detection of some WSI artifacts; however, they do not incorporate uncertainty in their predictions. This paper proposes an uncertainty-aware Deep Kernel Learning (DKL) model to detect blurry areas and folded tissues, two types of artifacts that can appear in WSIs. The proposed probabilistic model combines a CNN feature extractor and a sparse Gaussian Processes (GPs) classifier, which improves the performance of current state-of-the-art artifact detection DCNNs and provides uncertainty estimates. We achieved 0.996 and 0.938 F1 scores for blur and folded tissue detection on unseen data, respectively. In extensive experiments, we validated the DKL model on unseen data from external independent cohorts with different staining and tissue types, where it outperformed DCNNs. Interestingly, the DKL model is more confident in the correct predictions and less in the wrong ones. The proposed DKL model can be integrated into the preprocessing pipeline of CPATH systems to provide reliable predictions and possibly serve as a quality control tool.
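The paper's probabilistic model pairs a CNN feature extractor with a sparse GP classifier. As a generic illustration of the kind of uncertainty estimate such a model outputs (not the DKL method itself), the predictive entropy of the class probabilities is a common confidence measure:

```python
import numpy as np

def predictive_entropy(probs, eps=1e-12):
    """Entropy (in nats) of a categorical predictive distribution.

    probs: (..., C) class probabilities along the last axis;
    higher entropy means the model is less confident.
    """
    p = np.clip(probs, eps, 1.0)
    return -np.sum(p * np.log(p), axis=-1)

# A confident prediction has lower entropy than an ambiguous one,
# matching the observation that the model is "more confident in the
# correct predictions and less in the wrong ones".
confident = np.array([0.99, 0.01])
ambiguous = np.array([0.5, 0.5])
assert predictive_entropy(confident) < predictive_entropy(ambiguous)
```

In a quality-control setting, tiles whose entropy exceeds a chosen threshold could be flagged for manual review rather than trusted automatically.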

Citations: 0
Evaluation of mediastinal lymph node segmentation of heterogeneous CT data with full and weak supervision
IF 5.7 2区 医学 Q1 Medicine Pub Date : 2023-12-15 DOI: 10.1016/j.compmedimag.2023.102312
Alireza Mehrtash, Erik Ziegler, Tagwa Idris, Bhanusupriya Somarouthu, Trinity Urban, Ann S. LaCasce, Heather Jacene, Annick D. Van Den Abbeele, Steve Pieper, Gordon Harris, Ron Kikinis, Tina Kapur

Accurate lymph node size estimation is critical for staging cancer patients, initial therapeutic management, and assessing response to therapy. Current standard practice for quantifying lymph node size is based on a variety of criteria that use uni-directional or bi-directional measurements. Segmentation in 3D can provide more accurate evaluations of lymph node size. Fully convolutional neural networks (FCNs) have achieved state-of-the-art results in segmentation for numerous medical imaging applications, including lymph node segmentation. Adoption of deep learning segmentation models in clinical trials often faces numerous challenges. These include the lack of pixel-level ground truth annotations for training and the limited generalizability of models on unseen test domains due to the heterogeneity of test cases and variation in imaging parameters. In this paper, we studied and evaluated the performance of lymph node segmentation models on a dataset that was completely independent of the one used to create the models. We analyzed the generalizability of the models in the face of a heterogeneous dataset and assessed the potential effects of different disease conditions and imaging parameters. Furthermore, we systematically compared fully-supervised and weakly-supervised methods in this context. We evaluated the proposed methods using an independent dataset comprising 806 mediastinal lymph nodes from 540 unique patients. The results show that performance achieved on the independent test set is comparable to that on the training set. Furthermore, neither the underlying disease nor the heterogeneous imaging parameters impacted the performance of the models. Finally, the results indicate that our weakly-supervised method attains 90%–91% of the performance achieved by the fully supervised training.
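The contrast between uni-/bi-directional measurements and 3D segmentation can be made concrete: once a node is segmented, its volume follows directly from the binary mask and the voxel spacing. A minimal sketch of that generic computation (not taken from the paper):

```python
import numpy as np

def mask_volume_ml(mask, spacing_mm):
    """Volume in millilitres of a binary 3D segmentation mask.

    mask:       boolean (D, H, W) array, True inside the lymph node.
    spacing_mm: (dz, dy, dx) voxel spacing in millimetres.
    1 mL = 1000 mm^3, so volume = count * voxel_volume / 1000.
    """
    voxel_mm3 = float(np.prod(spacing_mm))
    return mask.sum() * voxel_mm3 / 1000.0

# A 10 x 10 x 10 block of 1 mm^3 voxels is exactly 1 mL.
mask = np.zeros((32, 32, 32), dtype=bool)
mask[:10, :10, :10] = True
assert abs(mask_volume_ml(mask, (1.0, 1.0, 1.0)) - 1.0) < 1e-9
```

Unlike a single short-axis diameter, this measure uses every segmented voxel, which is why 3D segmentation can give a more accurate size evaluation.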

Citations: 0
Deep learning for report generation on chest X-ray images
IF 5.7 2区 医学 Q1 Medicine Pub Date : 2023-12-14 DOI: 10.1016/j.compmedimag.2023.102320
Mohammed Yasser Ouis, Moulay A. Akhloufi

Medical imaging, specifically chest X-ray image analysis, is a crucial component of early disease detection and screening in healthcare. Deep learning techniques, such as convolutional neural networks (CNNs), have emerged as powerful tools for computer-aided diagnosis (CAD) in chest X-ray image analysis. These techniques have shown promising results in automating tasks such as classification, detection, and segmentation of abnormalities in chest X-ray images, with the potential to surpass human radiologists. In this review, we provide an overview of the importance of chest X-ray image analysis, historical developments, impact of deep learning techniques, and availability of labeled databases. We specifically focus on advancements and challenges in radiology report generation using deep learning, highlighting potential future advancements in this area. The use of deep learning for report generation has the potential to reduce the burden on radiologists, improve patient care, and enhance the accuracy and efficiency of chest X-ray image analysis in medical imaging.
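Report-generation systems of the kind surveyed here typically couple an image encoder with an autoregressive text decoder. A toy sketch of the simplest decoding loop, greedy search, with a stub in place of a real image-conditioned model (all names and the vocabulary are hypothetical):

```python
import numpy as np

def greedy_decode(next_token_logits, bos, eos, max_len=20):
    """Greedy autoregressive decoding, the simplest report-generation loop.

    next_token_logits(tokens) -> logits over the vocabulary given the
    tokens emitted so far (a real system would also condition on the
    encoded chest X-ray features).
    """
    tokens = [bos]
    for _ in range(max_len):
        tok = int(np.argmax(next_token_logits(tokens)))
        tokens.append(tok)
        if tok == eos:
            break
    return tokens

# Stub model over a 4-token vocabulary that always ends the "report":
# after BOS (0) it emits token 2, then EOS (3).
def stub(tokens):
    logits = np.full(4, -1.0)
    logits[2 if tokens[-1] == 0 else 3] = 1.0
    return logits

assert greedy_decode(stub, bos=0, eos=3) == [0, 2, 3]
```

Beam search and sampling are common refinements of this loop; the review's surveyed methods differ mainly in how the encoder and decoder are built, not in this basic generation mechanism.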

Citations: 0