
Journal of Medical Imaging: Latest Publications

ZeroReg3D: a zero-shot registration pipeline for 3D consecutive histopathology image reconstruction.
IF 1.7 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2025-07-01 Epub Date: 2025-08-05 DOI: 10.1117/1.JMI.12.4.044002
Juming Xiong, Ruining Deng, Jialin Yue, Siqi Lu, Junlin Guo, Marilyn Lionts, Tianyuan Yao, Can Cui, Junchao Zhu, Chongyu Qu, Yuechen Yang, Mengmeng Yin, Haichun Yang, Yuankai Huo

Purpose: Histological analysis plays a crucial role in understanding tissue structure and pathology. Although recent advancements in registration methods have improved 2D histological analysis, they often struggle to preserve critical 3D spatial relationships, limiting their utility in both clinical and research applications. Specifically, constructing accurate 3D models from 2D slices remains challenging due to tissue deformation, sectioning artifacts, variability in imaging techniques, and inconsistent illumination. Deep learning-based registration methods have demonstrated improved performance but suffer from limited generalizability and require large-scale training data. In contrast, non-deep-learning approaches offer better generalizability but often compromise on accuracy.

Approach: We introduce ZeroReg3D, a zero-shot registration pipeline that integrates zero-shot deep learning-based keypoint matching and non-deep-learning registration techniques to effectively mitigate deformation and sectioning artifacts without requiring extensive training data.
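The two-stage idea, zero-shot keypoint matching followed by optimization-based alignment, can be illustrated with the second stage alone. The sketch below is not the authors' pipeline; it assumes keypoint pairs have already been matched by some zero-shot matcher, and simply fits a 2D affine transform to those pairs by least squares:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2D affine transform mapping src points onto dst points.

    src, dst: (N, 2) arrays of matched keypoint coordinates.
    Returns a 2x3 matrix A such that dst ~= [x, y, 1] @ A.T.
    """
    n = src.shape[0]
    # Homogeneous design matrix: one row [x, y, 1] per keypoint
    X = np.hstack([src, np.ones((n, 1))])
    # Solve X @ A.T = dst for the affine parameters
    A_T, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return A_T.T

def apply_affine(A, pts):
    X = np.hstack([pts, np.ones((pts.shape[0], 1))])
    return X @ A.T

# Synthetic check: recover a known small rotation plus translation,
# as might relate two adjacent histology sections
theta = np.deg2rad(5.0)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
t = np.array([3.0, -2.0])
src = np.random.default_rng(0).uniform(0, 100, (20, 2))
dst = src @ R.T + t
A = fit_affine(src, dst)
err = np.abs(apply_affine(A, src) - dst).max()
```

In the actual pipeline, the matched pairs would come from a pretrained zero-shot matcher, and the affine fit would be followed by non-rigid refinement.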

Results: Comprehensive evaluations demonstrate that our pairwise 2D image registration method improves registration accuracy by ∼10% over baseline methods, outperforming existing strategies in both accuracy and robustness. High-fidelity 3D reconstructions further validate the effectiveness of our approach, establishing ZeroReg3D as a reliable framework for precise 3D reconstruction from consecutive 2D histological images.

Conclusions: We introduced ZeroReg3D, a zero-shot registration pipeline tailored for accurate 3D reconstruction from serial histological sections. By combining zero-shot deep learning-based keypoint matching with optimization-based affine and non-rigid registration techniques, ZeroReg3D effectively addresses critical challenges such as tissue deformation, sectioning artifacts, staining variability, and inconsistent illumination without requiring retraining or fine-tuning.

Citations: 0
JMI's Special Issues and Shared Journeys.
IF 1.7 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2025-07-01 Epub Date: 2025-08-29 DOI: 10.1117/1.JMI.12.4.040101
Bennett A Landman

The editorial discusses current JMI special sections/issues and calls for papers.

Citations: 0
Wavelet-based compression method for scale-preserving in VNIR and SWIR hyperspectral data.
IF 1.7 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2025-07-01 Epub Date: 2025-07-23 DOI: 10.1117/1.JMI.12.4.044503
Hridoy Biswas, Rui Tang, Shamim Mollah, Mikhail Y Berezin

Purpose: Hyperspectral imaging (HSI) collects detailed spectral information across hundreds of narrow bands, providing valuable datasets for applications such as medical diagnostics. However, the large size of HSI datasets, often exceeding several gigabytes, creates significant challenges in data transmission, storage, and processing. We aim to develop a wavelet-based compression method that addresses these challenges while preserving the integrity and quality of spectral information.

Approach: The proposed method applies wavelet transforms to the spectral dimension of hyperspectral data in three steps: (1) wavelet transformation for dimensionality reduction, (2) spectral cropping to eliminate low-intensity bands, and (3) scale matching to maintain accurate wavelength mapping. Daubechies wavelets were used to achieve up to 32× compression while ensuring spectral fidelity and spatial feature retention.
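The dimensionality-reduction step can be sketched with the simplest wavelet, Haar, applied along the spectral axis: keeping only the low-pass (approximation) coefficients halves the band count per level, so five levels give 32× compression. This is a minimal illustration only; the paper uses Daubechies wavelets and adds the spectral-cropping and scale-matching steps, which are omitted here:

```python
import numpy as np

def haar_approx(cube, levels):
    """Keep only Haar approximation coefficients along the spectral axis.

    cube: (H, W, B) hyperspectral cube with B divisible by 2**levels.
    Each level halves the number of bands, giving 2**levels compression.
    """
    out = cube.astype(float)
    for _ in range(levels):
        # Scaled pairwise sum of adjacent bands = Haar low-pass filter
        out = (out[..., 0::2] + out[..., 1::2]) / np.sqrt(2)
    return out

# Toy cube: 64 spectral bands compressed to 2 (32x compression)
cube = np.random.default_rng(1).random((4, 4, 64))
compressed = haar_approx(cube, levels=5)
```

Discarding the detail coefficients entirely is what makes this lossy; the paper's choice of Daubechies wavelets trades a longer filter for better spectral fidelity at the same compression ratio.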

Results: The wavelet-based method achieved up to 32× compression, corresponding to a 96.88% reduction in data size without significant loss of important data. Unlike principal component analysis and independent component analysis, the method preserved the original wavelength scale, enabling straightforward spectral interpretation. In addition, the compressed data exhibited minimal loss in spatial features, with improvements in contrast and noise reduction compared with spectral binning.

Conclusions: We demonstrate that wavelet-based compression is an effective solution for managing large HSI datasets in medical imaging. The method preserves critical spectral and spatial information and therefore facilitates efficient data storage and processing, providing a way for the practical integration of HSI technology in clinical applications.

Citations: 0
Physician-guided deep learning model for assessing thymic epithelial tumor volume.
IF 1.7 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2025-07-01 Epub Date: 2025-08-13 DOI: 10.1117/1.JMI.12.4.046501
Nirmal Choradia, Nathan Lay, Alex Chen, James Latanski, Meredith McAdams, Shannon Swift, Christine Feierabend, Testi Sherif, Susan Sansone, Laercio DaSilva, James L Gulley, Arlene Sirajuddin, Stephanie Harmon, Arun Rajan, Baris Turkbey, Chen Zhao

Purpose: The Response Evaluation Criteria in Solid Tumors (RECIST) relies solely on one-dimensional measurements to evaluate tumor response to treatments. However, thymic epithelial tumors (TETs), which frequently metastasize to the pleural cavity, exhibit a curvilinear morphology that complicates accurate measurement. To address this, we developed a physician-guided deep learning model and performed a retrospective study based on a patient cohort derived from clinical trials, aiming at efficient and reproducible volumetric assessments of TETs.

Approach: We used 231 computed tomography scans comprising 572 TETs from 81 patients. Tumors within the scans were identified and manually outlined to develop a ground truth that was used to measure model performance. TETs were characterized by their general location within the chest cavity: lung parenchyma, pleura, or mediastinum. Model performance was quantified on an unseen test set of 61 scans by mask Dice similarity coefficient (DSC), tumor DSC, absolute volume difference, and relative volume difference.
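The evaluation metrics named here are standard. A minimal numpy sketch (illustrative, not the study's code) of the Dice similarity coefficient and the absolute/relative volume differences on binary masks:

```python
import numpy as np

def dice(pred, truth):
    """Dice similarity coefficient between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

def volume_diffs(pred, truth, voxel_volume):
    """Absolute and relative volume difference between two masks."""
    vp = pred.sum() * voxel_volume
    vt = truth.sum() * voxel_volume
    return abs(vp - vt), abs(vp - vt) / vt

# Toy masks: the prediction misses one slice of the ground-truth cube
truth = np.zeros((10, 10, 10), dtype=bool)
truth[2:8, 2:8, 2:8] = True          # 216 voxels
pred = np.zeros_like(truth)
pred[2:8, 2:8, 2:7] = True           # 180 voxels
d = dice(pred, truth)
abs_vd, rel_vd = volume_diffs(pred, truth, voxel_volume=1.0)
```

With real CT data, `voxel_volume` would come from the scan's voxel spacing, so the differences are reported in physical units such as cm³.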

Results: We included 81 patients: 47 (58.0%) had thymic carcinoma; the remaining patients had thymoma B1, B2, B2/B3, or B3. The artificial intelligence (AI) model achieved an overall DSC of 0.77 per scan when provided with boxes surrounding the tumors as identified by physicians, corresponding to a mean absolute volume difference between the AI measurement and the ground truth of 16.1 cm³ and a mean relative volume difference of 22%.

Conclusion: We have successfully developed a robust annotation workflow and AI segmentation model for analyzing advanced TETs. The model has been integrated into the Picture Archiving and Communication System alongside RECIST measurements to enhance outcome assessments for patients with metastatic TETs.

Citations: 0
MAFL-Attack: a targeted attack method against deep learning-based medical image segmentation models.
IF 1.9 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2025-07-01 Epub Date: 2025-07-16 DOI: 10.1117/1.JMI.12.4.044501
Junmei Sun, Xin Zhang, Xiumei Li, Lei Xiao, Huang Bai, Meixi Wang, Maoqun Yao

Purpose: Medical image segmentation based on deep learning plays a crucial role in computer-aided medical diagnosis. However, these models remain vulnerable to imperceptible adversarial attacks, which can lead to misdiagnosis in clinical practice. Research on adversarial attack methods is beneficial for improving the robustness of medical image segmentation models. Currently, there is little research on adversarial attacks targeting deep learning-based medical image segmentation models. Existing attack methods often perform poorly in both attack effectiveness and the image quality of adversarial examples, and they focus primarily on nontargeted attacks. To address these limitations and further investigate adversarial attacks on segmentation models, we propose an adversarial attack approach.

Approach: We propose an approach called momentum-driven adaptive feature-cosine-similarity with low-frequency constraint attack (MAFL-Attack). The proposed feature-cosine-similarity loss uses high-level abstract semantic information to interfere with the understanding of models about adversarial examples. The low-frequency component constraint ensures the imperceptibility of adversarial examples by constraining the low-frequency components. In addition, the momentum and dynamic step-size calculator are used to enhance the attack process.
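The momentum-driven update is in the spirit of momentum iterative gradient attacks. The toy sketch below is not the MAFL-Attack implementation (the scalar loss, step size, and decay are all illustrative); it only shows the accumulate-normalized-gradient-then-step pattern, with projection back into an L∞ ball:

```python
import numpy as np

def momentum_attack(x, grad_fn, steps=10, alpha=0.1, mu=0.9, eps=0.5):
    """Momentum-driven iterative perturbation (MI-FGSM-style sketch).

    x: input array; grad_fn: returns dLoss/dx; alpha: step size;
    mu: momentum decay; eps: L-inf budget for the perturbation.
    """
    x_adv = x.copy()
    g = np.zeros_like(x)
    for _ in range(steps):
        grad = grad_fn(x_adv)
        # Accumulate the L1-normalized gradient into the momentum buffer
        g = mu * g + grad / (np.abs(grad).sum() + 1e-12)
        # Ascend the loss, then project back into the L-inf ball around x
        x_adv = x_adv + alpha * np.sign(g)
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv

# Toy loss L(x) = sum(x**2), whose gradient is 2x
x0 = np.array([0.3, -0.2])
adv = momentum_attack(x0, grad_fn=lambda z: 2 * z)
```

In MAFL-Attack the gradient would instead come from the proposed feature-cosine-similarity loss plus the low-frequency constraint, and the fixed `alpha` would be replaced by the dynamic step-size calculator.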

Results: Experimental results demonstrate that MAFL-Attack generates adversarial examples with superior targeted attack effects compared with the existing Adaptive Segmentation Mask Attack method, in terms of the evaluation metrics of Intersection over Union, accuracy, L2, L∞, Peak Signal to Noise Ratio, and Structure Similarity Index Measure.

Conclusions: The design idea of the MAFL-Attack inspires researchers to take corresponding defensive measures to strengthen the robustness of segmentation models.

Citations: 0
LED-based, real-time, hyperspectral imaging device.
IF 1.7 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2025-05-01 Epub Date: 2025-06-12 DOI: 10.1117/1.JMI.12.3.035002
Naeeme Modir, Maysam Shahedi, James Dormer, Ling Ma, Baowei Fei

Purpose: This study demonstrates the feasibility of using an LED array for hyperspectral imaging (HSI). The prototype validates the concept and provides insights into the design of future HSI applications. Our goal is to design, develop, and test a real-time, LED-based HSI prototype as a proof-of-principle device for in situ hyperspectral imaging using LEDs.

Approach: A prototype based on a multiwavelength LED array and a monochrome camera was designed and tested to investigate the properties of LED-based HSI. The LED array consisted of 18 LEDs at 18 different wavelengths from 405 nm to 910 nm. The performance of the imaging system was evaluated on different normal and cancerous ex vivo tissues. The impact of imaging conditions on HSI quality was investigated. The LED-based HSI device was compared with a reference hyperspectral camera system.

Results: The hyperspectral signatures of different imaging targets were acquired using our prototype HSI device, which are comparable to the data obtained using the reference HSI system.

Conclusions: The feasibility of employing a spectral LED array as the illumination source for high-speed and high-quality HSI has been demonstrated. The use of LEDs for HSI can open the door to numerous applications in endoscopic, laparoscopic, and handheld HSI devices.

Citations: 0
Summer of Ideas, Community, and Recognition.
IF 1.9 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2025-05-01 Epub Date: 2025-06-28 DOI: 10.1117/1.JMI.12.3.030101
Bennett A Landman

The editorial celebrates emerging breakthroughs and the foundational work that continues to shape the field.

Citations: 0
Mpox lesion counting with semantic and instance segmentation methods.
IF 1.9 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2025-05-01 Epub Date: 2025-06-19 DOI: 10.1117/1.JMI.12.3.034506
Bohan Jiang, Andrew J McNeil, Yihao Liu, David W House, Placide Mbala-Kingebeni, Olivier Tshiani Mbaya, Tyra Silaphet, Lori E Dodd, Edward W Cowen, Veronique Nussenblatt, Tyler Bonnett, Ziche Chen, Inga Saknite, Benoit M Dawant, Eric R Tkaczyk

Purpose: Mpox is a viral illness with symptoms similar to smallpox. A key clinical metric to monitor disease progression is the number of skin lesions. Manually counting mpox skin lesions is labor-intensive and susceptible to human error.

Approach: We previously developed an mpox lesion counting method based on the UNet segmentation model using 66 photographs from 18 patients. We have compared four additional methods: the instance segmentation methods Mask R-CNN, YOLOv8, and E2EC, in addition to a UNet++ model. We designed a patient-level leave-one-out experiment, assessing their performance using F1 score and lesion count metrics. Finally, we tested whether an ensemble of the networks outperformed any single model.

Results: The Mask R-CNN model achieved an F1 score of 0.75, YOLOv8 a score of 0.75, E2EC a score of 0.70, UNet++ a score of 0.81, and the baseline UNet a score of 0.79. Bland-Altman analysis of lesion count performance showed a limit of agreement (LoA) width of 62.2 for Mask R-CNN, 91.3 for YOLOv8, 94.2 for E2EC, and 62.1 for UNet++, with the baseline UNet model achieving 69.1. The ensemble showed an F1 score of 0.78 and an LoA width of 67.4.
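The LoA width reported here follows standard Bland-Altman analysis: the 95% limits of agreement are the mean difference ± 1.96 standard deviations, so the width is 2 × 1.96 × SD of the per-scan count differences. A minimal sketch with hypothetical counts (not the study's data):

```python
import numpy as np

def loa_width(pred_counts, true_counts):
    """Width of the Bland-Altman 95% limits of agreement.

    LoA = mean(diff) +/- 1.96 * sd(diff), so width = 2 * 1.96 * sd(diff).
    """
    diff = np.asarray(pred_counts, float) - np.asarray(true_counts, float)
    return 2 * 1.96 * diff.std(ddof=1)

# Hypothetical per-scan predicted vs. manually counted lesions
pred = [10, 12, 8, 15, 11]
true = [9, 14, 8, 13, 12]
width = loa_width(pred, true)
```

A narrower width means the automated counts agree more consistently with the manual counts, which is why UNet++ (62.1) and Mask R-CNN (62.2) compare favorably here.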

Conclusions: Instance segmentation methods and UNet-based semantic segmentation methods performed equally well in lesion counting. Furthermore, the ensemble of the trained models showed no performance increase over the best-performing model UNet, likely because errors are frequently shared across models. Performance is likely limited by the availability of high-quality photographs for this complex problem, rather than the methodologies used.

Deep learning-based temporal MR image reconstruction for accelerated interventional imaging during in-bore biopsies.
IF 1.9 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2025-05-01 Epub Date: 2025-06-03 DOI: 10.1117/1.JMI.12.3.035001
Constant R Noordman, Steffan J W Borgers, Martijn F Boomsma, Thomas C Kwee, Marloes M G van der Lees, Christiaan G Overduin, Maarten de Rooij, Derya Yakar, Jurgen J Fütterer, Henkjan J Huisman

Purpose: Interventional MR imaging struggles with speed and efficiency. We aim to accelerate transrectal in-bore MR-guided biopsies for prostate cancer through undersampled image reconstruction and instrument localization by image segmentation.

Approach: In this single-center retrospective study, we used 8464 MR 2D multislice scans from 1289 patients undergoing a prostate biopsy to train and test a deep learning-based spatiotemporal MR image reconstruction model and a nnU-Net segmentation model. The dataset was synthetically undersampled using various undersampling rates (R = 8, 16, 25, 32). An annotated, unseen subset of these data was used to compare our model with a nontemporal model and readers in a reader study involving seven radiologists from three centers based in the Netherlands. We assessed a maximum noninferior undersampling rate using instrument prediction success rate and instrument tip position (ITP) error.
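Retrospective (synthetic) undersampling is typically implemented as a binary mask over k-space phase-encode lines. The sketch below shows one common Cartesian scheme (regular line skipping plus a fully sampled low-frequency center); the study's actual sampling pattern is not specified here, so the parameters are assumptions:

```python
def undersampling_mask(n_lines, R, center_fraction=0.08):
    """Binary mask over phase-encode lines: keep every R-th line plus a
    fully sampled low-frequency center band (hypothetical parameters)."""
    mask = [i % R == 0 for i in range(n_lines)]
    n_center = max(1, int(n_lines * center_fraction))
    start = (n_lines - n_center) // 2
    for i in range(start, start + n_center):
        mask[i] = True   # autocalibration / low-frequency region
    return mask

mask = undersampling_mask(256, R=16)
kept = sum(mask)             # number of sampled phase-encode lines
effective_R = 256 / kept     # achieved acceleration, slightly below nominal R
```

Because the center band is always sampled, the achieved acceleration is somewhat lower than the nominal rate R.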

Results: The maximum noninferior undersampling rate is 16 times for the temporal model (ITP error: 2.28 mm, 95% CI: 1.68 to 3.31; mean difference from reference standard: 0.63 mm, P = 0.09), whereas a nontemporal model could not produce noninferior image reconstructions comparable to our reference standard. Furthermore, the nontemporal model (ITP error: 6.27 mm, 95% CI: 3.90 to 9.07) and readers (ITP error: 6.87 mm, 95% CI: 6.38 to 7.40) had low instrument prediction success rates (46% and 60%, respectively) compared with the temporal model's 95%.

Conclusion: Deep learning-based spatiotemporal MR image reconstruction can improve time-critical interventional tasks such as instrument tracking. We found 16-fold undersampling to be the maximum noninferior acceleration at which image quality is preserved, ITP error is minimized, and the instrument prediction success rate is maximized.

Improving annotation efficiency for fully labeling a breast mass segmentation dataset.
IF 1.9 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2025-05-01 Epub Date: 2025-05-21 DOI: 10.1117/1.JMI.12.3.035501
Vaibhav Sharma, Alina Jade Barnett, Julia Yang, Sangwook Cheon, Giyoung Kim, Fides Regina Schwartz, Avivah Wang, Neal Hall, Lars Grimm, Chaofan Chen, Joseph Y Lo, Cynthia Rudin

Purpose: Breast cancer remains a leading cause of death for women. Screening programs are deployed to detect cancer at early stages. One current barrier identified by breast imaging researchers is a shortage of labeled image datasets. Addressing this problem is crucial to improve early detection models. We present an active learning (AL) framework for segmenting breast masses from 2D digital mammography, and we publish labeled data. Our method aims to reduce the input needed from expert annotators to reach a fully labeled dataset.

Approach: We create a dataset of 1136 mammographic masses with pixel-wise binary segmentation labels, with the test subset labeled independently by two different teams. With this dataset, we simulate a human annotator within an AL framework to develop and compare AI-assisted labeling methods, using a discriminator model and a simulated oracle to collect acceptable segmentation labels. A UNet model is retrained on these labels, generating new segmentations. We evaluate various oracle heuristics using the percentage of segmentations that the oracle relabels, and we measure the quality of the proposed labels by evaluating intersection over union on a validation dataset.
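The oracle's accept/relabel decision can be sketched as an IoU threshold against a reference mask; the threshold value and the flat 0/1 mask encoding below are illustrative assumptions, not the paper's settings:

```python
def iou(pred, truth):
    """Intersection over union of two flat binary masks (0/1 lists)."""
    inter = sum(p & t for p, t in zip(pred, truth))
    union = sum(p | t for p, t in zip(pred, truth))
    return inter / union if union else 1.0

def simulated_oracle(proposals, references, threshold=0.8):
    """Accept a proposed label if its IoU with the reference clears the
    threshold; otherwise flag it for relabeling by the annotator."""
    accepted, relabel = [], []
    for idx, (p, t) in enumerate(zip(proposals, references)):
        (accepted if iou(p, t) >= threshold else relabel).append(idx)
    return accepted, relabel

accepted, relabel = simulated_oracle(
    proposals=[[1, 1, 0], [1, 0, 0]],
    references=[[1, 1, 0], [0, 1, 0]],
)
```

The fraction of proposals landing in `relabel` is the oracle-relabeling percentage used to compare heuristics: the lower it is at a fixed label quality, the less expert input is needed.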

Results: Our method reduces expert annotator input by 44%. We present a dataset of 1136 binary segmentation labels approved by board-certified radiologists and make the 143-image validation set public for comparison with other researchers' methods.

Conclusions: We demonstrate that AL can significantly improve the efficiency and time-effectiveness of creating labeled mammogram datasets. Our framework facilitates the development of high-quality datasets while minimizing manual effort in the domain of digital mammography.
