
Latest Publications from the Journal of Medical Imaging

Image database with slides prepared by the Ziehl-Neelsen method for training automated detection and counting systems for tuberculosis bacilli.
IF 1.9 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2025-05-01 Epub Date: 2025-06-13 DOI: 10.1117/1.JMI.12.3.034505
João Victor Boechat Gomide, Thales Francisco Mota Carvalho, Élida Aparecida Leal, Lida Jouca de Assis Figueiredo, Nauhara Vieira de Castro Barroso, Júnia Pessoa Tarabal, Cláudio José Augusto

Purpose: We aim to provide a robust dataset for training automated systems to detect tuberculosis bacilli using Ziehl-Neelsen stained slides. By making this dataset available, we address a critical gap in the availability of public datasets for developing and testing artificial intelligence techniques for tuberculosis diagnosis. Our rationale is grounded in the urgent need for diagnostic tools that can enhance tuberculosis diagnosis quickly and efficiently, especially in resource-limited settings.

Approach: The Ziehl-Neelsen method was used to prepare 362 slides, which were manually read. According to the World Health Organization's guidelines for performing bacilloscopy for tuberculosis diagnosis, experts annotated each slide to diagnose it as negative or positive. In addition, selected images underwent a detailed annotation process aimed at pinpointing the location of each bacillus and cluster within each image.

Results: The database consists of three directories. The first contains all the images, organized by slide, and records, for each slide, whether it is negative or, if positive, the number of crosses. The second directory contains the 502 images selected for training automated systems, with each bacillus's position annotated, together with the Python code used. All the image fragments (positive and negative patches) used in the models' training, validation, and testing stages are available in the third directory.
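
As a quick illustration of consuming the third directory described above, here is a minimal Python sketch; the root path, file naming, and split layout are hypothetical stand-ins, not the dataset's actual structure or the authors' code:

```python
from pathlib import Path
from PIL import Image

# Hypothetical layout mirroring the three directories described above;
# adjust paths and naming to the dataset as actually distributed.
root = Path("tb_zn_dataset")
patches_dir = root / "patches"   # directory 3: positive/negative patches

def load_patches(split: str = "train"):
    """Yield (image, label) pairs from one split of the patch directory."""
    for label_name, label in (("negative", 0), ("positive", 1)):
        for path in sorted((patches_dir / split / label_name).glob("*.png")):
            yield Image.open(path).convert("RGB"), label
```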

Conclusions: The development of this annotated image database represents a significant advancement in tuberculosis diagnosis. By providing a high-quality and accessible resource to the scientific community, we enhance existing diagnostic tools and facilitate the development of automated technologies.

Citations: 0
SAM-MedUS: a foundational model for universal ultrasound image segmentation.
IF 1.9 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2025-03-01 Epub Date: 2025-02-27 DOI: 10.1117/1.JMI.12.2.027001
Feng Tian, Jintao Zhai, Jinru Gong, Weirui Lei, Shuai Chang, Fangfang Ju, Shengyou Qian, Xiao Zou

Purpose: Segmentation of ultrasound images is crucial for medical diagnosis, monitoring, and research, and although existing methods perform well, they are limited to specific organs, tumors, and imaging devices. Applications of the Segment Anything Model (SAM), such as SAM-med2d, are trained on large collections of medical datasets in which ultrasound images make up only a small fraction.

Approach: We propose SAM-MedUS, a model for generic ultrasound image segmentation that leverages the latest publicly available ultrasound image datasets to build a diverse dataset containing eight site categories for training and testing. We integrate ConvNext V2 and CM blocks in the encoder for better global context extraction. In addition, a boundary loss function is used to improve the segmentation of fuzzy boundaries and low-contrast ultrasound images.

Results: Experimental results show that SAM-MedUS outperforms recent methods on multiple ultrasound datasets. For easier datasets such as the adult kidney, it achieves 87.93% IoU and 93.58% Dice, whereas for more complex ones such as the infant vein, IoU and Dice reach 62.31% and 78.93%, respectively.
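
For reference, the two metrics reported above can be computed from a pair of binary masks as follows — a minimal sketch, not the authors' evaluation code:

```python
import numpy as np

def iou_and_dice(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7):
    """IoU and Dice for a pair of binary segmentation masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    iou = inter / (union + eps)
    dice = 2.0 * inter / (pred.sum() + target.sum() + eps)
    return iou, dice
```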

Conclusions: We collected and collated an ultrasound dataset covering multiple site types to achieve uniform segmentation of ultrasound images. In addition, the auxiliary ConvNext V2 and CM blocks enhance the model's ability to extract global information, and the boundary loss allows the model to exhibit robust performance and excellent generalization ability.

Citations: 0
Distributed and networked analysis of volumetric image data for remote collaboration of microscopy image analysis.
IF 1.9 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2025-03-01 Epub Date: 2025-03-11 DOI: 10.1117/1.JMI.12.2.024001
Alain Chen, Shuo Han, Soonam Lee, Chichen Fu, Changye Yang, Liming Wu, Seth Winfree, Kenneth W Dunn, Paul Salama, Edward J Delp

Purpose: The advancement of high-content optical microscopy has enabled the acquisition of very large three-dimensional (3D) image datasets. The analysis of these image volumes requires more computational resources than a biologist may have access to on a typical desktop or laptop computer. This is especially true if machine learning tools are being used for image analysis. With the increased amount of data analysis and computational complexity, there is a need for a more accessible, easy-to-use, and efficient network-based 3D image processing system. The distributed and networked analysis of volumetric image data (DINAVID) system was developed to enable remote analysis of 3D microscopy images for biologists.

Approach: We present an overview of the DINAVID system and compare it to other tools currently available for microscopy image analysis. DINAVID is designed using open-source tools and has two main sub-systems: a computational system for 3D microscopy image processing and analysis, and a 3D visualization system.

Results: DINAVID is a network-based system with a simple web interface that allows biologists to upload 3D volumes for analysis and visualization. DINAVID enables an image access model in which a center hosts image volumes and remote users analyze them, without the need for remote users to manage any computational resources.

Conclusions: The DINAVID system, designed and developed using open-source tools, enables biologists to analyze and visualize 3D microscopy volumes remotely without the need to manage computational resources. DINAVID also provides several image analysis tools, including pre-processing and several segmentation models.

Citations: 0
Prediction of brain metastasis progression after stereotactic radiosurgery: sensitivity to changing the definition of progression.
IF 1.9 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2025-03-01 Epub Date: 2025-04-08 DOI: 10.1117/1.JMI.12.2.024504
Robert Policelli, David DeVries, Joanna Laba, Andrew Leung, Terence Tang, Ali Albweady, Ghada Alqaidy, Aaron D Ward

Purpose: Machine learning (ML) has been used to predict tumor progression after stereotactic radiosurgery (SRS) from pre-treatment MRI in brain metastasis (BM) patients, but there is variability in the definition of what constitutes progression. We aim to measure how much the performance of an ML model predicting post-SRS progression changes when various definitions of progression are used.

Approach: We collected pre- and post-SRS contrast-enhanced T1-weighted MRI scans from 62 BM patients (n = 115 BMs). We trained a random decision forest model using radiomic features extracted from pre-SRS scans to predict progression versus non-progression for each BM. We varied the definition of progression by changing (1) the follow-up period (<9, <12, <15, <18, or <24 months); (2) the size change metric denoting progression (≥10%, ≥15%, ≥20%, or ≥25% in volume, or ≥20% in Response Assessment in Neuro-Oncology BM diameter); and (3) whether BMs with treatment-related size changes (TRSCs) (pseudo-progression and/or radiation-necrosis) were labeled as progression. We measured performance using the area under the receiver operating characteristic curve (AUC).
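
To make the definition toggles concrete, here is a minimal sketch of how one configurable progression label might be computed per BM; all names and defaults are illustrative assumptions, not the authors' code:

```python
def label_progression(volume_change_pct: float, months_to_event: float, has_trsc: bool,
                      followup_months: int = 12, volume_threshold_pct: float = 20.0,
                      trsc_is_progression: bool = True) -> int:
    """Return 1 (progression) or 0 (non-progression) under one definition.

    volume_change_pct: percent change in BM volume at the event time point.
    months_to_event:   time from SRS to that time point.
    has_trsc:          treatment-related size change (pseudo-progression or
                       radiation necrosis) observed for this BM.
    """
    if has_trsc:
        return 1 if trsc_is_progression else 0
    within_followup = months_to_event < followup_months
    return int(within_followup and volume_change_pct >= volume_threshold_pct)
```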

Results: When we varied the follow-up period, size change metric, and TRSC labeling, the AUCs had ranges of 0.06 (0.69 to 0.75), 0.06 (0.69 to 0.75), and 0.08 (0.69 to 0.77), respectively. Radiomic feature importance remained similar.

Conclusions: Variability in the definition of BM progression has a measurable impact on the performance of an MRI radiomic-based ML model predicting post-SRS progression. A consistent, clinically relevant definition of post-SRS progression across studies would enable robust comparison of proposed ML systems, thereby accelerating progress in this field.

Citations: 0
Bidirectional teaching between lightweight multi-view networks for intestine segmentation from CT volume.
IF 1.9 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2025-03-01 Epub Date: 2025-03-31 DOI: 10.1117/1.JMI.12.2.024003
Qin An, Hirohisa Oda, Yuichiro Hayashi, Takayuki Kitasaka, Aitaro Takimoto, Akinari Hinoki, Hiroo Uchida, Kojiro Suzuki, Masahiro Oda, Kensaku Mori

Purpose: We present a semi-supervised method for intestine segmentation to assist clinicians in diagnosing intestinal diseases. Accurate segmentation is essential for planning treatments for conditions such as intestinal obstruction. Although fully supervised learning performs well with abundant labeled data, the complexity of the intestine's spatial structure makes labeling time-intensive, resulting in limited labeled data. We propose a 3D segmentation network with a bidirectional teaching strategy to enhance segmentation accuracy using this limited dataset.

Method: The proposed semi-supervised method segments the intestine from computed tomography (CT) volumes using bidirectional teaching, in which two backbones with different initial weights are trained simultaneously to generate pseudo-labels for each other and exploit unlabeled data, mitigating the challenge of limited labeled data. Intestine segmentation is further complicated by complex spatial features. To address this, we propose a lightweight multi-view symmetric network, which uses small convolutional kernels instead of large ones to reduce parameters and capture multi-scale features from diverse receptive fields, enhancing learning ability.
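
A minimal sketch of one bidirectional-teaching loss computation, assuming two segmentation backbones that output per-voxel logits; this illustrates the cross pseudo-labeling idea only and is not the authors' implementation (in particular, their per-pseudo-label weighting is reduced here to a single scalar):

```python
import torch
import torch.nn.functional as F

def bidirectional_teaching_step(net_a, net_b, x_l, y_l, x_u, w_pseudo=0.5):
    """Each backbone is supervised on labeled data (x_l, y_l) and on the
    other backbone's pseudo-labels for unlabeled data x_u."""
    sup = F.cross_entropy(net_a(x_l), y_l) + F.cross_entropy(net_b(x_l), y_l)

    logits_a, logits_b = net_a(x_u), net_b(x_u)
    pseudo_a = logits_a.argmax(dim=1).detach()   # A teaches B
    pseudo_b = logits_b.argmax(dim=1).detach()   # B teaches A
    cross = F.cross_entropy(logits_a, pseudo_b) + F.cross_entropy(logits_b, pseudo_a)

    return sup + w_pseudo * cross
```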

Results: We evaluated the proposed method with 59 CT volumes and repeated all experiments five times. Experimental results showed that the average Dice of the proposed method was 80.45%, the average precision was 84.12%, and the average recall was 78.84%.

Conclusions: The proposed method can effectively utilize large-scale unlabeled data with pseudo-labels, which is crucial for reducing the effect of limited labeled data in medical image segmentation. Furthermore, we assign different weights to the pseudo-labels to improve their reliability. The results show that the method produces competitive performance compared with previous methods.

Citations: 0
Using a fully automated, quantitative fissure integrity score extracted from chest CT scans of emphysema patients to predict endobronchial valve response.
IF 1.9 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2025-03-01 Epub Date: 2025-03-13 DOI: 10.1117/1.JMI.12.2.024501
Dallas K Tada, Grace H Kim, Jonathan G Goldin, Pangyu Teng, Kalyani Vyapari, Ashley Banola, Fereidoun Abtin, Michael McNitt-Gray, Matthew S Brown

Purpose: We aim to develop and validate a prediction model using a previously developed fully automated quantitative fissure integrity score (FIS) extracted from pre-treatment CT images to identify suitable candidates for endobronchial valve (EBV) treatment.

Approach: We retrospectively collected 96 anonymized pre- and post-treatment chest computed tomography (CT) exams from patients with moderate to severe emphysema who underwent EBV treatment. We used a previously developed, fully automated, deep-learning-based approach to quantitatively assess the completeness of each fissure by obtaining the FIS for each fissure from each patient's pre-treatment CT exam. The response to EBV treatment was recorded as the amount of targeted lobe volume reduction (TLVR) relative to the target lobe volume prior to treatment, as assessed on the pre- and post-treatment CT scans. EBV placement was considered successful with a TLVR of ≥350 cc. The dataset was split into a training set (N = 58) and a test set (N = 38) to train and validate a logistic regression model using fivefold cross-validation; the extracted FIS of each patient's targeted treatment lobe was the primary CT predictor. Using the training set, a receiver operating characteristic (ROC) curve analysis and predictive values were quantified over a range of FIS thresholds to determine an optimal cutoff value that would distinguish complete and incomplete fissures, which was used to evaluate predictive values of the test set cases.
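
The threshold search described above can be sketched with scikit-learn; the choice of Youden's J as the optimality criterion is an assumption for illustration, since the abstract does not state how the cutoff was selected:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

def optimal_fis_cutoff(y_true, fis_scores):
    """Pick the FIS threshold maximizing Youden's J = sensitivity + specificity - 1."""
    fpr, tpr, thresholds = roc_curve(y_true, fis_scores)
    best = np.argmax(tpr - fpr)
    return thresholds[best], roc_auc_score(y_true, fis_scores)
```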

Results: ROC analysis of the training set provided an AUC of 0.83, and the determined FIS threshold was 89.5%. Using this threshold on the test set achieved an accuracy of 81.6%, specificity (Sp) of 90.9%, sensitivity (Sn) of 77.8%, positive predictive value (PPV) of 62.5%, and negative predictive value of 95.5%.

Conclusions: A model using the quantified FIS shows potential as a predictive biomarker for whether a targeted lobe will achieve successful volume reduction from EBV treatment.

Citations: 0
DCEF-AVNet: multi-scale feature fusion and attention mechanism-guided brain tumor segmentation network.
IF 1.9 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2025-03-01 Epub Date: 2025-03-20 DOI: 10.1117/1.JMI.12.2.024503
Linlin Wang, Tong Zhang, Chuanyun Wang, Qian Gao, Zhongyi Li, Jing Shao

Purpose: Accurate and efficient automatic segmentation of brain tumors is critical for diagnosis and treatment. However, the diversity in the appearance, location, and shape of brain tumors and their subregions, coupled with complex boundaries, presents significant challenges. We aim to improve segmentation accuracy by addressing limitations in V-Net, including insufficient utilization of multi-scale features and difficulties in managing complex spatial relationships and long-range dependencies.

Approach: We propose an improved network structure, dynamic convolution enhanced fusion axial V-Net (DCEF-AVNet), which integrates an enhanced feature fusion module and axial attention mechanisms. The feature fusion module integrates dynamic convolution with a redesigned skip connection strategy to effectively combine multi-scale features, reducing feature inconsistencies and improving representation capability. Axial attention mechanisms are introduced at encoder-decoder connections to manage spatial relationships and alleviate long-range dependency issues. The network was evaluated using the BraTS2021 dataset, with performance measured in terms of Dice coefficients and Hausdorff distances.
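
As a sketch of the axial attention idea — factorizing full spatial self-attention into cheaper per-axis passes — here is a minimal 2D PyTorch module; it illustrates the mechanism only and is not the DCEF-AVNet implementation:

```python
import torch
import torch.nn as nn

class AxialAttention(nn.Module):
    """Self-attention along a single spatial axis of a (B, C, H, W) feature map."""
    def __init__(self, dim: int, heads: int = 4, axis: str = "h"):
        super().__init__()  # dim must be divisible by heads
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.axis = axis

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        if self.axis == "h":   # attend along H, independently per column
            seq = x.permute(0, 3, 2, 1).reshape(b * w, h, c)
        else:                  # attend along W, independently per row
            seq = x.permute(0, 2, 3, 1).reshape(b * h, w, c)
        out, _ = self.attn(seq, seq, seq)
        if self.axis == "h":
            return out.reshape(b, w, h, c).permute(0, 3, 2, 1)
        return out.reshape(b, h, w, c).permute(0, 3, 1, 2)
```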

Results: DCEF-AVNet achieved Dice coefficients of 92.49%, 91.35%, and 91.96% for the whole tumor (WT), tumor core (TC), and enhancing tumor (ET) regions, respectively, significantly outperforming baseline methods. The model also demonstrated robust performance across multiple runs, with consistently low standard deviations in metrics.

Conclusions: The integration of dynamic convolution, enhanced feature fusion, and axial attention mechanisms enables DCEF-AVNet to deliver superior segmentation accuracy and robustness. These results underscore its potential for advancing automated brain tumor segmentation and improving clinical decision-making.

Citations: 0
WS-SfMLearner: self-supervised monocular depth and ego-motion estimation on surgical videos with unknown camera parameters.
IF 1.9 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2025-03-01 Epub Date: 2025-04-30 DOI: 10.1117/1.JMI.12.2.025003
Ange Lou, Jack Noble

Purpose: Accurate depth estimation in surgical videos is a pivotal component of numerous image-guided surgery procedures. However, creating ground truth depth maps for surgical videos is often infeasible due to challenges such as inconsistent illumination and sensor noise. As a result, self-supervised depth and ego-motion estimation frameworks are gaining traction, eliminating the need for manually annotated depth maps. Despite the progress, current self-supervised methods still rely on known camera intrinsic parameters, which are frequently unavailable or unrecorded in surgical environments. We address this gap by introducing a self-supervised system capable of jointly predicting depth maps, camera poses, and intrinsic parameters, providing a comprehensive solution for depth estimation under such constraints.

Approach: We developed a self-supervised depth and ego-motion estimation framework, incorporating a cost volume-based auxiliary supervision module. This module provides additional supervision for predicting camera intrinsic parameters, allowing for robust estimation even without predefined intrinsics. The system was rigorously evaluated on a public dataset to assess its effectiveness in simultaneously predicting depth, camera pose, and intrinsic parameters.
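
Why the intrinsics matter here: the self-supervised photometric loss warps one frame into another via depth and pose, which requires backprojecting pixels through the intrinsic matrix K. A minimal numpy sketch of that backprojection (illustrative only; parameter names are assumptions, not the authors' code):

```python
import numpy as np

def intrinsics_matrix(fx, fy, cx, cy):
    """Pinhole intrinsics K from predicted focal lengths and principal point."""
    return np.array([[fx, 0.0, cx],
                     [0.0, fy, cy],
                     [0.0, 0.0, 1.0]])

def backproject(depth, K):
    """Lift a depth map to camera-space 3D points: X = D * K^-1 [u, v, 1]^T."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)]).reshape(3, -1).astype(float)
    rays = np.linalg.inv(K) @ pix                  # per-pixel viewing rays
    return (rays * depth.reshape(1, -1)).T         # (H*W, 3) points
```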

Results: The experimental results demonstrated that the proposed method significantly improved the accuracy of ego-motion and depth prediction, even when compared with methods incorporating known camera intrinsics. In addition, by integrating our cost volume-based supervision, the accuracy of camera parameter estimation, including intrinsic parameters, was further enhanced.

Conclusions: We present a self-supervised system for depth, ego-motion, and intrinsic parameter estimation, effectively overcoming the limitations imposed by unknown or missing camera intrinsics. The experimental results confirm that the proposed method outperforms the baseline techniques, offering a robust solution for depth estimation in complex surgical video scenarios, with broader implications for improving image-guided surgery systems.

Citations: 0
New Growth, New Opportunities.
IF 1.9 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2025-03-01 Epub Date: 2025-04-26 DOI: 10.1117/1.JMI.12.2.020101
Bennett A Landman

JMI Editor-in-Chief Bennett Landman discusses special issues and offers a few thoughts on the use of AI-assisted writing.

Citations: 0
Multi-contrast computed tomography atlas of healthy pancreas with dense displacement sampling registration.
IF 1.9 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2025-03-01 Epub Date: 2025-04-17 DOI: 10.1117/1.JMI.12.2.024006
Yinchi Zhou, Ho Hin Lee, Yucheng Tang, Xin Yu, Qi Yang, Michael E Kim, Lucas W Remedios, Shunxing Bao, Jeffrey M Spraggins, Yuankai Huo, Bennett A Landman

Purpose: Diverse population demographics can lead to substantial variation in the human anatomy. Therefore, standard anatomical atlases are needed for interpreting organ-specific analyses. Among abdominal organs, the pancreas exhibits notable variability in volumetric morphology, shape, and appearance, complicating the generalization of population-wide features. Understanding the common features of a healthy pancreas is crucial for identifying biomarkers and diagnosing pancreatic diseases.

Approach: We propose a high-resolution CT atlas framework optimized for the healthy pancreas. We introduce a deep-learning-based preprocessing technique to extract abdominal ROIs and leverage a hierarchical registration pipeline to align pancreatic anatomy across populations. Briefly, DEEDS affine and non-rigid registration techniques are employed to transfer patient abdominal volumes to a fixed high-resolution atlas template. To generate and evaluate the pancreas atlas, multi-phase contrast CT scans of 443 subjects (aged 15 to 50 years, with no reported history of pancreatic disease) were processed.
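
Once the two-stage registration has mapped every subject into the template space, the atlas itself reduces to voxel-wise statistics over the aligned volumes. A minimal sketch of that final averaging step (not the authors' pipeline; DEEDS registration runs separately as its own tool):

```python
import numpy as np

def build_mean_atlas(registered_volumes):
    """Voxel-wise mean and std over subject volumes already aligned to the template."""
    stack = np.stack(registered_volumes, axis=0)   # (N, D, H, W)
    return stack.mean(axis=0), stack.std(axis=0)
```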

Results: The two-stage DEEDS affine and non-rigid registration outperforms other state-of-the-art tools, achieving the highest scores for pancreas label transfer across all phases (non-contrast: 0.497, arterial: 0.505, portal venous: 0.494, delayed: 0.497). External evaluation with 100 portal venous scans and 13 labeled abdominal organs shows a mean Dice score of 0.504. The low variance between the pancreases of registered subjects and the obtained pancreas atlas further illustrates the generalizability of the proposed method.

Conclusion: We introduce a high-resolution pancreas atlas framework to generalize healthy biomarkers across populations with multi-contrast abdominal CT. The atlases and the associated pancreas organ labels are publicly available through the Human Biomolecular Atlas Program (HuBMAP).

Citations: 0