
Latest publications in the Journal of Medical Imaging

Deep-learning-based washout classification for decision support in contrast-enhanced ultrasound examinations of the liver.
IF 1.9 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2025-07-01 Epub Date: 2025-07-22 DOI: 10.1117/1.JMI.12.4.044502
Hannah Strohm, Sven Rothlübbers, Jürgen Jenne, Dirk-André Clevert, Thomas Fischer, Niklas Hitschrich, Bernhard Mumm, Paul Spiesecke, Matthias Günther

Purpose: Contrast-enhanced ultrasound (CEUS) is a reliable tool to diagnose focal liver lesions, which appear ambiguous in normal B-mode ultrasound. However, interpretation of the dynamic contrast sequences can be challenging, hindering the widespread application of CEUS. We investigate the use of a deep-learning-based image classifier for determining washout, a diagnosis-relevant feature, from CEUS acquisitions.

Approach: We introduce a data representation that is agnostic to heterogeneity in lesion size, subtype, and sequence length. An image-based classifier is then applied to washout classification. Strategies for coping with sparse annotations and motion are systematically evaluated, as is the potential benefit of using a perfusion model to cover missing time points.

Results: Performance is comparable to that reported in the literature, with a maximum balanced accuracy of 84.0% on the validation set and 82.0% on the test set. Correlation-based frame selection improved classification performance, whereas additional motion compensation showed no benefit in the conducted experiments.

Conclusions: Deep-learning-based washout classification is shown to be feasible in principle, and it offers a simpler form of interpretability than direct benign-versus-malignant classification. The concept of classifying individual features rather than the diagnosis itself could be extended to other features, such as arterial inflow behavior. The main factors distinguishing this work from existing approaches are the data representation and task formulation, as well as a large dataset of 500 liver lesions from two centers for algorithm development and testing.
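As a concrete illustration of the headline metric above: balanced accuracy is the mean of per-class recalls, which keeps a rare washout-positive class from being masked by the majority class. A minimal, dependency-free sketch on toy labels (not the paper's data):

```python
from collections import defaultdict

def balanced_accuracy(y_true, y_pred):
    """Mean of per-class recall: robust to class imbalance,
    e.g. when washout-positive lesions are much rarer than negative ones."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for t, p in zip(y_true, y_pred):
        total[t] += 1
        if t == p:
            correct[t] += 1
    recalls = [correct[c] / total[c] for c in total]
    return sum(recalls) / len(recalls)

# Imbalanced toy labels: plain accuracy would be 0.8, but balanced
# accuracy penalizes the completely missed minority class.
y_true = [0, 0, 0, 0, 1]
y_pred = [0, 0, 0, 0, 0]
print(balanced_accuracy(y_true, y_pred))  # → 0.5
```

A classifier that predicts only the majority class thus scores 1/K on K classes, which is why balanced accuracy is the more honest summary for screening-style tasks.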

Citations: 0
Harnessing chemically crosslinked microbubble clusters using deep learning for ultrasound contrast imaging.
IF 1.9 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2025-07-01 Epub Date: 2025-07-12 DOI: 10.1117/1.JMI.12.4.047001
Teja Pathour, Ghazal Rastegar, Shashank R Sirsi, Baowei Fei

Purpose: We aim to investigate and isolate the distinctive acoustic properties generated by chemically crosslinked microbubble clusters (CCMCs) using machine learning (ML) techniques, specifically using an anomaly detection model based on autoencoders.

Approach: CCMCs were synthesized via copper-free click chemistry and subjected to acoustic analysis using a clinical transducer. Radiofrequency data were acquired, processed, and organized into training and testing datasets for the ML models. We trained an anomaly detection model on nonclustered microbubbles (MBs) and tested it on CCMCs to isolate their unique acoustic signatures. A separate set of control experiments was performed to validate the anomaly detection model.

Results: The anomaly detection model successfully identified frames exhibiting unique acoustic signatures associated with CCMCs. Frequency domain analysis further confirmed that these frames displayed higher amplitude and energy, suggesting the occurrence of potential coalescence events. The specificity of the model was validated through control experiments, in which both groups contained only individual MBs without clustering. As anticipated, no anomalies were detected in this control dataset, reinforcing the model's ability to distinguish clustered MBs from nonclustered ones.

Conclusions: We highlight the feasibility of detecting and distinguishing the unique acoustic characteristics of CCMCs, thereby improving the detectability and localization of contrast agents in ultrasound imaging. The elevated acoustic amplitudes produced by CCMCs offer potential advantages for more effective contrast agent detection, which is particularly valuable in super-resolution ultrasound imaging. Both the contrast agent and the ML-based analysis approach hold promise for a wide range of applications.

Citations: 0
GRN+: a simplified generative reinforcement network for tissue layer analysis in 3D ultrasound images for chronic low-back pain.
IF 1.7 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2025-07-01 Epub Date: 2025-07-31 DOI: 10.1117/1.JMI.12.4.044001
Zixue Zeng, Xiaoyan Zhao, Matthew Cartier, Xin Meng, Jiantao Pu

Purpose: 3D ultrasound delivers high-resolution, real-time images of soft tissues, which are essential for pain research. However, manually distinguishing various tissues for quantitative analysis is labor-intensive. We aimed to automate multilayer segmentation in 3D ultrasound volumes using minimal annotated data by developing generative reinforcement network plus (GRN+), a semi-supervised multi-model framework.

Approach: GRN+ integrates a ResNet-based generator and a U-Net segmentation model. Through a method called segmentation-guided enhancement (SGE), the generator produces new images under the guidance of the segmentation model, with its weights adjusted according to the segmentation loss gradient. To prevent gradient explosion and secure stable training, a two-stage backpropagation strategy was implemented: the first stage propagates the segmentation loss through both the generator and segmentation model, whereas the second stage concentrates on optimizing the segmentation model alone, thereby refining mask prediction using the generated images.

Results: Tested on 69 fully annotated 3D ultrasound scans from 29 subjects with six manually labeled tissue layers, GRN+ outperformed all other semi-supervised methods in terms of the Dice coefficient using only 5% labeled data, despite not using unlabeled data for unsupervised training. In addition, when applied to fully annotated datasets, GRN+ with SGE achieved a 2.16% higher Dice coefficient while incurring lower computational costs compared to other models.

Conclusions: GRN+ provides accurate tissue segmentation while reducing both computational expenses and the dependency on extensive annotations, making it an effective tool for 3D ultrasound analysis in patients with chronic lower back pain.
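The two-stage backpropagation strategy can be illustrated on a deliberately tiny scalar model: in stage 1 the segmentation loss updates both the "generator" and the "segmentation" weight; in stage 2 the generator is frozen and only the segmentation weight is refined. Every quantity below is a toy stand-in, not GRN+'s actual networks:

```python
import numpy as np

# Toy stand-in for segmentation-guided enhancement (SGE): a scalar
# "generator" weight g produces an image x = g * u; a scalar
# "segmentation" weight s predicts a mask m = s * x; loss = (m - y)^2.
g, s, lr = 0.5, 0.5, 0.05
u, y = 2.0, 1.0  # input and target "mask" (illustrative values)

def loss(g, s):
    return (s * g * u - y) ** 2

# Stage 1: the segmentation loss backpropagates through BOTH models,
# so the generator's weight follows the segmentation-loss gradient.
for _ in range(50):
    m = s * g * u
    dg = 2 * (m - y) * s * u
    ds = 2 * (m - y) * g * u
    g -= lr * dg
    s -= lr * ds

# Stage 2: only the segmentation model is optimized; freezing the
# generator avoids compounding gradients through it (stable training).
for _ in range(50):
    m = s * g * u
    ds = 2 * (m - y) * g * u
    s -= lr * ds

print(loss(g, s))  # segmentation loss after both stages (near zero)
```

The same schedule scales up directly when g and s are a ResNet generator and a U-Net: stage 1 is a joint backward pass, stage 2 detaches the generated images before the segmentation loss.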

Citations: 0
Influence of phantom design on evaluation metrics in photon counting spectral head CT: a simulation study.
IF 1.9 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2025-07-01 Epub Date: 2025-07-12 DOI: 10.1117/1.JMI.12.4.043501
Bahaa Ghammraoui, Mridul Bhattarai, Harsha Marupudi, Stephen J Glick

Purpose: Accurate iodine quantification in contrast-enhanced head CT is crucial for precise diagnosis and treatment planning. Traditional CT methods, which use energy-integrating detectors and dual-exposure techniques for material discrimination, often increase patient radiation exposure and are susceptible to motion artifacts and spectral resolution loss. Photon counting detectors (PCDs), capable of acquiring multiple energy windows in a single exposure with superior energy resolution, offer a promising alternative. However, the adoption of these technological advancements requires corresponding developments in evaluation methodologies to ensure their safe and effective implementation. One critical area of concern is the accuracy of iodine quantification, which is commonly assessed using cylindrical phantoms that neither replicate the shape of the human head nor incorporate skull-mimicking materials. These phantoms are widely used not only for testing but also for calibration, which may contribute to an overestimation of system performance in clinical applications. We address the impact of phantom design on evaluation metrics in spectral head CT, comparing conventional cylindrical phantoms to anatomically realistic elliptical phantoms with skull simulants.

Approach: We conducted simulations using a photon-counting spectral CT system equipped with cadmium telluride (CdTe) detectors, utilizing the Photon Counting Toolkit and Tigre CT software for detector response and CT geometry simulations. We compared cylindrical phantoms (20 cm diameter) to elliptical phantoms in three different sizes, incorporating skull materials with major/minor diameters and skull thicknesses of 18/14/0.5, 20/16/0.6, and 23/18/0.7 cm. Iodine inserts at concentrations of 0, 2, 5, and 10 mg/mL with diameters of 1, 0.5, and 0.3 cm were used. We evaluated the influence of bowtie filters, various tube currents, and operating voltages. Image reconstruction was performed after beam hardening correction using the signal-to-thickness calibration (STC) method with standard filtered back projection, followed by both image-based and projection-based material decomposition.
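In its simplest two-bin linear form, the projection-based material decomposition mentioned above reduces to inverting a basis-material system relating measured log-domain attenuation integrals to equivalent water and iodine thicknesses. The attenuation coefficients below are illustrative numbers, not calibrated values from the study:

```python
import numpy as np

# Basis-material matrix: rows are energy bins, columns are materials.
# These coefficients are made-up for illustration (not NIST values).
M = np.array([[0.20, 4.0],    # [mu_water(E_low),  mu_iodine(E_low)]
              [0.18, 1.5]])   # [mu_water(E_high), mu_iodine(E_high)]

true_thickness = np.array([15.0, 0.02])  # cm of water, cm of iodine
p = M @ true_thickness                   # simulated log-domain projections

# Decomposition: solve the basis-material system. Least squares
# generalizes this to more than two PCD energy bins.
est, *_ = np.linalg.lstsq(M, p, rcond=None)
print(np.allclose(est, true_thickness))  # → True
```

Because this inversion happens in projection space, before reconstruction, it is less sensitive to phantom-shape-dependent beam hardening than image-based decomposition, which is consistent with the cross-phantom consistency reported in the Results.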

Results: The results showed that image-based methods were more sensitive to phantom design, with cylindrical phantoms exhibiting enhanced performance compared with anatomically realistic designs across key metrics, including systematic error, root mean square error (RMSE), and precision. By contrast, the projection-based material decomposition method demonstrated greater consistency across different phantom designs and improved accuracy and precision. This highlights its potential for more reliable iodine quantification in complex geometries.

Conclusions: These findings underscore the critical importance of phantom design, especially the inclusion of skull-mimicking materials, in the assessment of quantitative results. Cylindrical phantoms, commonly used for calibration and testing, may overestimate iodine quantification performance in head CT because of their simplified geometry. We emphasize the need for anatomically realistic phantom designs, such as elliptical phantoms with skull simulants, to enable more clinically relevant and accurate evaluation of spectral photon counting head CT systems.

Citations: 0
ZeroReg3D: a zero-shot registration pipeline for 3D consecutive histopathology image reconstruction.
IF 1.7 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2025-07-01 Epub Date: 2025-08-05 DOI: 10.1117/1.JMI.12.4.044002
Juming Xiong, Ruining Deng, Jialin Yue, Siqi Lu, Junlin Guo, Marilyn Lionts, Tianyuan Yao, Can Cui, Junchao Zhu, Chongyu Qu, Yuechen Yang, Mengmeng Yin, Haichun Yang, Yuankai Huo

Purpose: Histological analysis plays a crucial role in understanding tissue structure and pathology. Although recent advancements in registration methods have improved 2D histological analysis, they often struggle to preserve critical 3D spatial relationships, limiting their utility in both clinical and research applications. Specifically, constructing accurate 3D models from 2D slices remains challenging due to tissue deformation, sectioning artifacts, variability in imaging techniques, and inconsistent illumination. Deep learning-based registration methods have demonstrated improved performance but suffer from limited generalizability and require large-scale training data. In contrast, non-deep-learning approaches offer better generalizability but often compromise on accuracy.

Approach: We introduce ZeroReg3D, a zero-shot registration pipeline that integrates zero-shot deep learning-based keypoint matching and non-deep-learning registration techniques to effectively mitigate deformation and sectioning artifacts without requiring extensive training data.

Results: Comprehensive evaluations demonstrate that our pairwise 2D image registration method improves registration accuracy by approximately 10% over baseline methods, outperforming existing strategies in both accuracy and robustness. High-fidelity 3D reconstructions further validate the effectiveness of our approach, establishing ZeroReg3D as a reliable framework for precise 3D reconstruction from consecutive 2D histological images.

Conclusions: We introduced ZeroReg3D, a zero-shot registration pipeline tailored for accurate 3D reconstruction from serial histological sections. By combining zero-shot deep learning-based keypoint matching with optimization-based affine and non-rigid registration techniques, ZeroReg3D effectively addresses critical challenges such as tissue deformation, sectioning artifacts, staining variability, and inconsistent illumination without requiring retraining or fine-tuning.
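The optimization-based affine stage that follows keypoint matching can be sketched as a linear least-squares fit of the six 2D affine parameters to matched point pairs. The points below are synthetic, and ZeroReg3D's actual solver and non-rigid stage are more involved:

```python
import numpy as np

rng = np.random.default_rng(2)
src = rng.uniform(0, 512, size=(40, 2))  # keypoints in the moving slice

# Hypothetical ground-truth affine (small rotation/scale plus translation).
A_true = np.array([[0.98, -0.05],
                   [0.04,  1.02]])
t_true = np.array([3.0, -7.0])
dst = src @ A_true.T + t_true            # matched keypoints in the fixed slice

# Solve dst ≈ [src | 1] @ theta for all six affine parameters at once.
X = np.hstack([src, np.ones((len(src), 1))])
theta, *_ = np.linalg.lstsq(X, dst, rcond=None)

A_est, t_est = theta[:2].T, theta[2]
print(np.allclose(A_est, A_true), np.allclose(t_est, t_true))  # → True True
```

With noisy or partially wrong matches (the realistic case), the same fit is typically wrapped in RANSAC so that outlier correspondences do not drag the estimate.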

Citations: 0
JMI's Special Issues and Shared Journeys.
IF 1.7 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2025-07-01 Epub Date: 2025-08-29 DOI: 10.1117/1.JMI.12.4.040101
Bennett A Landman

The editorial discusses current JMI special sections/issues and calls for papers.

引用次数: 0
Wavelet-based compression method for scale-preserving in VNIR and SWIR hyperspectral data.
IF 1.7 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2025-07-01 Epub Date: 2025-07-23 DOI: 10.1117/1.JMI.12.4.044503
Hridoy Biswas, Rui Tang, Shamim Mollah, Mikhail Y Berezin

Purpose: Hyperspectral imaging (HSI) collects detailed spectral information across hundreds of narrow bands, providing valuable datasets for applications such as medical diagnostics. However, the large size of HSI datasets, often exceeding several gigabytes, creates significant challenges in data transmission, storage, and processing. We aim to develop a wavelet-based compression method that addresses these challenges while preserving the integrity and quality of spectral information.

Approach: The proposed method applies wavelet transforms to the spectral dimension of hyperspectral data in three steps: (1) wavelet transformation for dimensionality reduction, (2) spectral cropping to eliminate low-intensity bands, and (3) scale matching to maintain accurate wavelength mapping. Daubechies wavelets were used to achieve up to 32× compression while ensuring spectral fidelity and spatial feature retention.

Results: The wavelet-based method achieved up to 32× compression, corresponding to a 96.88% reduction in data size without significant loss of important data. Unlike principal component analysis and independent component analysis, the method preserved the original wavelength scale, enabling straightforward spectral interpretation. In addition, the compressed data exhibited minimal loss in spatial features, with improvements in contrast and noise reduction compared with spectral binning.

Conclusions: We demonstrate that wavelet-based compression is an effective solution for managing large HSI datasets in medical imaging. The method preserves critical spectral and spatial information and therefore facilitates efficient data storage and processing, providing a way for the practical integration of HSI technology in clinical applications.
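As a rough illustration of the three-step scheme described above, the following NumPy sketch compresses the spectral axis of a synthetic cube with a Haar transform (a stand-in for the Daubechies wavelets used in the paper) and downsamples the wavelength axis in lockstep so that the band-to-wavelength mapping is preserved. The spectral-cropping step is omitted, and all shapes and wavelength values are invented for the example.

```python
import numpy as np

def haar_compress_spectral(cube, wavelengths, levels=5):
    """Toy spectral compression: repeatedly take Haar approximation
    coefficients (scaled averages of adjacent bands) along the spectral
    axis, and downsample the wavelength axis in parallel so each kept
    band still maps to an accurate center wavelength."""
    bands = cube.shape[-1]
    assert bands % (2 ** levels) == 0, "band count must be divisible by 2**levels"
    comp, wl = cube.astype(float), np.asarray(wavelengths, dtype=float)
    for _ in range(levels):
        # Haar approximation along the spectral axis: 2x band reduction.
        comp = (comp[..., ::2] + comp[..., 1::2]) / np.sqrt(2.0)
        # Scale matching: midpoint of each merged wavelength pair.
        wl = (wl[::2] + wl[1::2]) / 2.0
    return comp, wl

# Synthetic 64x64 image with 256 spectral bands spanning 400-1000 nm.
cube = np.random.rand(64, 64, 256)
wl = np.linspace(400.0, 1000.0, 256)
small, small_wl = haar_compress_spectral(cube, wl, levels=5)
print(small.shape, small_wl.shape)   # (64, 64, 8) (8,)  -> 32x fewer bands
```

Five decomposition levels give the 2^5 = 32x compression quoted in the paper; each retained band's wavelength is the mean of the 32 original wavelengths it summarizes.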

Physician-guided deep learning model for assessing thymic epithelial tumor volume.
IF 1.7 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2025-07-01 Epub Date: 2025-08-13 DOI: 10.1117/1.JMI.12.4.046501
Nirmal Choradia, Nathan Lay, Alex Chen, James Latanski, Meredith McAdams, Shannon Swift, Christine Feierabend, Testi Sherif, Susan Sansone, Laercio DaSilva, James L Gulley, Arlene Sirajuddin, Stephanie Harmon, Arun Rajan, Baris Turkbey, Chen Zhao

Purpose: The Response Evaluation Criteria in Solid Tumors (RECIST) relies solely on one-dimensional measurements to evaluate tumor response to treatments. However, thymic epithelial tumors (TETs), which frequently metastasize to the pleural cavity, exhibit a curvilinear morphology that complicates accurate measurement. To address this, we developed a physician-guided deep learning model and performed a retrospective study based on a patient cohort derived from clinical trials, aiming at efficient and reproducible volumetric assessments of TETs.

Approach: We used 231 computed tomography scans comprising 572 TETs from 81 patients. Tumors within the scans were identified and manually outlined to develop a ground truth that was used to measure model performance. TETs were characterized by their general location within the chest cavity: lung parenchyma, pleura, or mediastinum. Model performance was quantified on an unseen test set of 61 scans by mask Dice similarity coefficient (DSC), tumor DSC, absolute volume difference, and relative volume difference.

Results: We included 81 patients: 47 (58.0%) had thymic carcinoma; the remaining patients had thymoma B1, B2, B2/B3, or B3. The artificial intelligence (AI) model achieved an overall DSC of 0.77 per scan when provided with boxes surrounding the tumors as identified by physicians, corresponding to a mean absolute volume difference between the AI measurement and the ground truth of 16.1 cm³ and a mean relative volume difference of 22%.

Conclusion: We have successfully developed a robust annotation workflow and AI segmentation model for analyzing advanced TETs. The model has been integrated into the Picture Archiving and Communication System alongside RECIST measurements to enhance outcome assessments for patients with metastatic TETs.
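The evaluation metrics reported above are standard; a minimal NumPy sketch of the mask Dice similarity coefficient and the absolute/relative volume differences on toy binary masks (the mask shapes and the voxel volume are invented for illustration):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def volume_diffs(pred, truth, voxel_cm3):
    """Absolute and relative volume difference from voxel counts."""
    v_pred = pred.sum() * voxel_cm3
    v_true = truth.sum() * voxel_cm3
    return abs(v_pred - v_true), abs(v_pred - v_true) / v_true

# Toy masks: the prediction misses one slab of the ground-truth cube.
truth = np.zeros((10, 10, 10), bool); truth[2:8, 2:8, 2:8] = True  # 216 voxels
pred  = np.zeros((10, 10, 10), bool); pred[3:8, 2:8, 2:8] = True   # 180 voxels
abs_d, rel_d = volume_diffs(pred, truth, voxel_cm3=0.001)
print(round(dice(pred, truth), 3), round(abs_d, 3), round(rel_d, 3))  # 0.909 0.036 0.167
```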

MAFL-Attack: a targeted attack method against deep learning-based medical image segmentation models.
IF 1.9 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2025-07-01 Epub Date: 2025-07-16 DOI: 10.1117/1.JMI.12.4.044501
Junmei Sun, Xin Zhang, Xiumei Li, Lei Xiao, Huang Bai, Meixi Wang, Maoqun Yao

Purpose: Medical image segmentation based on deep learning plays a crucial role in computer-aided medical diagnosis. However, segmentation models remain vulnerable to imperceptible adversarial attacks, which can lead to misdiagnosis in clinical practice. Research on adversarial attack methods helps improve the robustness design of medical image segmentation models, yet attack methods targeting deep learning-based medical image segmentation remain understudied. Existing attack methods often perform poorly in terms of both attack effectiveness and the image quality of the adversarial examples, and they focus primarily on nontargeted attacks. To address these limitations and further investigate adversarial attacks on segmentation models, we propose a new adversarial attack approach.

Approach: We propose the momentum-driven adaptive feature-cosine-similarity with low-frequency constraint attack (MAFL-Attack). The proposed feature-cosine-similarity loss uses high-level abstract semantic information to interfere with the model's understanding of adversarial examples, whereas the low-frequency component constraint keeps the adversarial examples imperceptible by constraining their low-frequency components. In addition, momentum and a dynamic step-size calculator are used to strengthen the attack process.

Results: Experimental results demonstrate that MAFL-Attack generates adversarial examples with superior targeted attack effects compared with the existing Adaptive Segmentation Mask Attack method, in terms of the evaluation metrics of Intersection over Union, accuracy, L2, L∞, Peak Signal to Noise Ratio, and Structure Similarity Index Measure.

Conclusions: The design of MAFL-Attack can guide researchers in devising corresponding defensive measures that strengthen the robustness of segmentation models.
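The general recipe described above — momentum accumulation on a normalized gradient, a feature-space cosine-similarity objective, and a low-frequency constraint on the perturbation — can be sketched on a toy problem. The "model" here is just a random linear feature extractor, and the low-frequency constraint is implemented as an FFT low-pass projection; none of this reproduces the authors' actual losses, segmentation models, or dynamic step-size calculator.

```python
import numpy as np

rng = np.random.default_rng(0)

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def mafl_style_attack(x, x_target, W, steps=100, alpha=0.05, mu=0.9, keep=8):
    """Toy momentum attack: push the linear features W@x toward the
    features of x_target while confining the perturbation to its
    lowest `keep` Fourier frequencies."""
    f_t = W @ x_target
    delta, m, history = np.zeros_like(x), np.zeros_like(x), []
    for _ in range(steps):
        f = W @ (x + delta)
        c = cosine(f, f_t)
        history.append(c)
        # Analytic gradient of cos(f, f_t) with respect to f, chained through W.
        g_f = f_t / (np.linalg.norm(f) * np.linalg.norm(f_t)) \
              - c * f / (np.linalg.norm(f) ** 2)
        g = W.T @ g_f
        # Momentum accumulation on the L1-normalized gradient (MI-FGSM style).
        m = mu * m + g / (np.abs(g).sum() + 1e-12)
        delta = delta + alpha * m
        # Low-frequency constraint: zero all but the lowest Fourier bins.
        D = np.fft.rfft(delta)
        D[keep:] = 0.0
        delta = np.fft.irfft(D, n=x.size)
    return delta, history

x = rng.standard_normal(64)
x_t = rng.standard_normal(64)
W = rng.standard_normal((16, 64))
delta, hist = mafl_style_attack(x, x_t, W)
print(f"cosine to target features: {hist[0]:.2f} -> {max(hist):.2f}")
```

Gradient ascent on the cosine objective drives the perturbed input's features toward the target's, while the projection step guarantees the perturbation contains only low-frequency content.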

LED-based, real-time, hyperspectral imaging device.
IF 1.7 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2025-05-01 Epub Date: 2025-06-12 DOI: 10.1117/1.JMI.12.3.035002
Naeeme Modir, Maysam Shahedi, James Dormer, Ling Ma, Baowei Fei

Purpose: This study demonstrates the feasibility of using an LED array for hyperspectral imaging (HSI). Our goal is to design, develop, and test a real-time, LED-based HSI prototype as a proof-of-principle device for in situ hyperspectral imaging; the prototype validates the concept and provides insights into the design of future HSI applications.

Approach: A prototype based on a multiwavelength LED array and a monochrome camera was designed and tested to investigate the properties of LED-based HSI. The LED array consisted of 18 LEDs at 18 different wavelengths from 405 to 910 nm. The performance of the imaging system was evaluated on different normal and cancerous ex vivo tissues, the impact of imaging conditions on HSI quality was investigated, and the LED-based HSI device was compared with a reference hyperspectral camera system.

Results: The hyperspectral signatures of different imaging targets were acquired using our prototype HSI device, which are comparable to the data obtained using the reference HSI system.

Conclusions: The feasibility of employing a spectral LED array as the illumination source for high-speed and high-quality HSI has been demonstrated. The use of LEDs for HSI can open the door to numerous applications in endoscopic, laparoscopic, and handheld HSI devices.
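Conceptually, an LED-based HSI acquisition captures one monochrome frame per LED and stacks the frames into a cube with a known wavelength per band. A hedged NumPy sketch of that assembly follows; the specific wavelength values, frame sizes, and dark/white calibration frames are assumptions — only the 18-LED count and the 405 to 910 nm range come from the text, and the frame capture itself is simulated with random data.

```python
import numpy as np

# Assumed evenly spaced stand-ins for the prototype's 18 LED wavelengths.
WAVELENGTHS_NM = np.linspace(405, 910, 18).round().astype(int)

def build_cube(frames, dark, white):
    """Stack per-LED monochrome frames into a reflectance cube:
    (frame - dark) / (white - dark), clipped to [0, 1]."""
    cube = np.stack([(f - dark) / np.maximum(white - dark, 1e-6)
                     for f in frames], axis=-1)
    return np.clip(cube, 0.0, 1.0)

rng = np.random.default_rng(1)
dark = np.full((48, 64), 0.02)    # sensor reading with all LEDs off
white = np.full((48, 64), 0.90)   # reading from a white reference target
frames = [rng.uniform(0.02, 0.90, (48, 64)) for _ in WAVELENGTHS_NM]

cube = build_cube(frames, dark, white)
signature = cube[24, 32, :]       # per-pixel spectral signature, one value per LED
print(cube.shape)                 # (48, 64, 18)
```

Pairing `signature` with `WAVELENGTHS_NM` gives the kind of per-target hyperspectral signature the abstract compares against a reference camera.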
