Pub Date: 2025-09-01 | Epub Date: 2025-09-04 | DOI: 10.1117/1.JMI.12.5.057501
Xueyuan Li, Can Cui, Ruining Deng, Yucheng Tang, Quan Liu, Tianyuan Yao, Shunxing Bao, Naweed Chowdhury, Haichun Yang, Yuankai Huo
Purpose: Recent developments in computational pathology have been driven by advances in vision foundation models (VFMs), particularly the Segment Anything Model (SAM). This model facilitates nuclei segmentation through two primary methods: prompt-based zero-shot segmentation and the use of cell-specific SAM models for direct segmentation. These approaches enable effective segmentation across a range of nuclei and cells. However, general VFMs often face challenges with fine-grained semantic segmentation, such as identifying specific nuclei subtypes or particular cells.
Approach: In this paper, we propose the molecular empowered all-in-SAM model to advance computational pathology by leveraging the capabilities of VFMs. This model incorporates a full-stack approach focusing on (1) annotation: engaging lay annotators through molecular empowered learning to reduce the need for detailed pixel-level annotations; (2) learning: adapting the SAM model to emphasize specific semantics, leveraging its strong generalizability via a SAM adapter; and (3) refinement: enhancing segmentation accuracy by integrating molecular oriented corrective learning.
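The adapter idea in step (2) can be illustrated with a minimal sketch: a small residual bottleneck module is attached to frozen backbone features, and only the adapter weights would be trained. The dimensions, initialization, and NumPy stand-in for SAM features below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

class BottleneckAdapter:
    """Tiny residual bottleneck inserted after a frozen feature layer.

    Only these weights would be trained; the backbone stays frozen.
    Sizes and initialization are illustrative, not the paper's.
    """
    def __init__(self, dim, bottleneck=16):
        self.w_down = rng.normal(0.0, 0.02, (dim, bottleneck))
        self.w_up = np.zeros((bottleneck, dim))  # zero-init: starts as identity

    def __call__(self, features):
        return features + relu(features @ self.w_down) @ self.w_up

features = rng.normal(size=(4, 256))   # stand-in for frozen SAM features
adapter = BottleneckAdapter(256)
out = adapter(features)
print(out.shape)   # (4, 256); with zero-initialized w_up, out equals features
```

Zero-initializing the up-projection is a common adapter trick: the adapted model starts exactly equal to the frozen backbone and drifts only as far as fine-tuning pushes it.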
Results: Experimental results from both in-house and public datasets show that the all-in-SAM model significantly improves cell classification performance, even when faced with varying annotation quality.
Conclusions: Our approach not only reduces the workload for annotators but also extends the accessibility of precise biomedical image analysis to resource-limited settings, thereby advancing medical diagnostics and automating pathology image analysis.
Title: "Fine-grained multiclass nuclei segmentation with molecular empowered all-in-SAM model." Journal of Medical Imaging, 12(5), 057501. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12410749/pdf/
Pub Date: 2025-09-01 | Epub Date: 2025-10-06 | DOI: 10.1117/1.JMI.12.5.054004
Siyuan Mei, Fuxin Fan, Mareike Thies, Mingxuan Gu, Fabian Wagner, Oliver Aust, Ina Erceg, Zeynab Mirzaei, Georgiana Neag, Yipeng Sun, Yixing Huang, Andreas Maier
Purpose: We aim to propose a reliable registration pipeline tailored for multimodal mouse bone imaging using X-ray microscopy (XRM) and light-sheet fluorescence microscopy (LSFM). These imaging modalities have emerged as pivotal tools in preclinical research, particularly for studying bone remodeling diseases such as osteoporosis. Although multimodal registration enables micrometer-level structural correspondence and facilitates functional analysis, conventional landmark-, feature-, or intensity-based approaches are often infeasible due to inconsistent signal characteristics and significant misalignment resulting from independent scanning, especially in real-world and reference-free scenarios.
Approach: To address these challenges, we introduce BigReg, an automatic, two-stage registration pipeline optimized for high-resolution XRM and LSFM volumes. The first stage involves extracting surface features and applying two successive global-to-local point-cloud-based methods for coarse alignment. The subsequent stage refines this alignment in the 3D Fourier domain using a modified cross-correlation technique, achieving precise volumetric registration.
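The Fourier-domain refinement stage builds on cross-correlation; a textbook phase-correlation sketch (the standard idea only, not BigReg's modified variant) shows how a 3D translation can be recovered from the normalized cross-power spectrum:

```python
import numpy as np

def phase_correlation_shift(ref, mov):
    """Estimate the integer shift d such that np.roll(ref, d) == mov,
    via the normalized cross-power spectrum (textbook phase correlation)."""
    cross = np.fft.fftn(mov) * np.conj(np.fft.fftn(ref))
    cross /= np.abs(cross) + 1e-12        # keep only the phase
    corr = np.fft.ifftn(cross).real       # a delta at the true shift
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # indices past the midpoint wrap around to negative shifts
    return tuple(int(p - s) if p > s // 2 else int(p)
                 for p, s in zip(peak, ref.shape))

rng = np.random.default_rng(1)
vol = rng.random((32, 32, 32))                        # stand-in volume
moved = np.roll(vol, shift=(5, -3, 2), axis=(0, 1, 2))
print(phase_correlation_shift(vol, moved))            # (5, -3, 2)
```

In practice a refinement like this only works after the point-cloud stage has removed gross misalignment, which is why the pipeline runs coarse-to-fine.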
Results: Evaluations using expert-annotated landmarks and augmented test data demonstrate that BigReg approaches the accuracy of landmark-based registration, with a landmark distance (LMD) of 8.36 ± 0.12 μm and a landmark fitness (LM fitness) of 85.71% ± 1.02%. Moreover, BigReg can provide an optimal initialization for mutual information-based methods that otherwise fail independently, further reducing LMD to 7.24 ± 0.11 μm and increasing LM fitness to 93.90% ± 0.77%.
Conclusions: To the best of our knowledge, BigReg is the first automated method to successfully register XRM and LSFM volumes without requiring manual intervention or prior alignment cues. Its ability to accurately align fine-scale structures, such as lacunae in XRM and osteocytes in LSFM, opens up new avenues for quantitative, multimodal analysis of bone microarchitecture and disease pathology, particularly in studies of osteoporosis.
Title: "BigReg: an efficient registration pipeline for high-resolution X-ray and light-sheet fluorescence microscopy." Journal of Medical Imaging, 12(5), 054004. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12499931/pdf/
Pub Date: 2025-09-01 | Epub Date: 2025-10-21 | DOI: 10.1117/1.JMI.12.5.051810
Dan Li, Kaiyan Li, Weimin Zhou, Mark A Anastasio
Purpose: The Bayesian ideal observer (IO) is a special model observer that achieves the best possible performance on tasks that involve signal detection or discrimination. Although IOs are desired for optimizing and assessing imaging technologies, they remain difficult to compute. Previously, a hybrid method that combines deep learning (DL) with a Markov-Chain Monte Carlo (MCMC) method was proposed for estimating the IO test statistic for joint signal detection-estimation tasks. That method will be referred to as the hybrid MCMC method. However, the hybrid MCMC method was restricted to use cases that involved relatively simple stochastic background and signal models.
Approach: The previously developed hybrid MCMC method is generalized by utilizing a framework that integrates deep generative modeling into the MCMC sampling process. This method employs a generative adversarial network (GAN) that is trained on object or signal ensembles to establish data-driven stochastic object and signal models, respectively, and will be referred to as the hybrid MCMC-GAN method. This circumvents the limitation of traditional MCMC methods and enables the estimation of the IO test statistic with consideration of broader classes of clinically relevant object and signal models.
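The latent-space sampling ingredient can be sketched with a toy stand-in: Metropolis-Hastings over a GAN-like latent variable given a noisy measurement. The linear "generator", noise level, and step size below are illustrative assumptions; the actual method uses a trained GAN and builds the IO test statistic from such posterior samples.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-in for a trained GAN generator: a fixed linear map from a
# 3D latent space to an 8-pixel "image". The real method uses a network.
A = rng.normal(size=(8, 3))
def generator(z):
    return A @ z

sigma = 0.5   # assumed measurement-noise standard deviation

def log_posterior(z, g):
    # standard-normal latent prior + Gaussian likelihood p(g | generator(z))
    resid = g - generator(z)
    return -0.5 * z @ z - 0.5 * resid @ resid / sigma**2

def metropolis_latent(g, n_steps=2000, step=0.3):
    """Random-walk Metropolis-Hastings in the latent space."""
    z = np.zeros(3)
    lp = log_posterior(z, g)
    samples = []
    for _ in range(n_steps):
        z_prop = z + step * rng.normal(size=3)
        lp_prop = log_posterior(z_prop, g)
        if np.log(rng.random()) < lp_prop - lp:   # accept/reject
            z, lp = z_prop, lp_prop
        samples.append(z)
    return np.array(samples)

g = generator(np.array([1.0, -0.5, 0.2])) + sigma * rng.normal(size=8)
chain = metropolis_latent(g)
print(chain.shape)   # (2000, 3)
```

Sampling in the low-dimensional latent space rather than the image space is what lets the GAN prior replace the hand-crafted stochastic models that restricted the earlier hybrid MCMC method.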
Results: The hybrid MCMC-GAN method was evaluated on two binary detection-estimation tasks in which the observer must detect a signal and estimate its amplitude if the signal is detected. First, a stylized signal-known-statistically (SKS) and background-known-exactly task was considered. A GAN was employed to establish a stochastic signal model, enabling direct comparison of our GAN-based IO approximation with a closed-form expression for the IO decision strategy. The results confirmed that the proposed method could accurately approximate the performance of the true IO. Next, an SKS and background-known-statistically (BKS) task was considered. Here, a GAN was employed to establish a stochastic object model that described anatomical variability in an ensemble of magnetic resonance (MR) brain images. This represented a setting where traditional MCMC methods are inapplicable. In this study, although a reference estimate of the true IO performance was unavailable, the hybrid MCMC-GAN produced area under the estimation receiver operating characteristic curve (AEROC) estimates that exceeded those of a sub-ideal observer that represented a lower bound for the IO performance.
Conclusion: By combining GAN-based generative modeling with MCMC, the hybrid MCMC-GAN method extends a previously proposed IO approximation method to more general detection-estimation tasks. This provides a new capability to benchmark and optimize imaging-system performance through virtual imaging studies.
Title: "Approximating the ideal observer for joint signal detection and estimation tasks by the use of Markov-Chain Monte Carlo with generative adversarial networks." Journal of Medical Imaging, 12(5), 051810. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12539792/pdf/
Pub Date: 2025-09-01 | Epub Date: 2025-10-24 | DOI: 10.1117/1.JMI.12.5.050101
Bennett A Landman
The editorial introduces JMI Volume 12, Issue 5, which marks the beginning of a new academic year with a celebration of innovation, mentorship, and the evolving role of scholarly publishing in medical imaging. It emphasizes that impactful research is not just about results but about teaching, sharing insights, and advancing the field through curiosity, collaboration, and community-driven resources.
Title: "Beyond the Victory Lap." Journal of Medical Imaging, 12(5), 050101. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12550604/pdf/
Pub Date: 2025-09-01 | Epub Date: 2025-05-28 | DOI: 10.1117/1.JMI.12.5.051805
Kazi Ramisa Rifa, Md Atik Ahamed, Jie Zhang, Abdullah Imran
Purpose: The accurate assessment of computed tomography (CT) image quality is crucial for ensuring diagnostic reliability while minimizing radiation dose. Radiologists' evaluations are time-consuming and labor-intensive. Existing automated approaches often require large CT datasets with predefined image quality assessment (IQA) scores, which often do not align well with clinical evaluations. We aim to develop a reference-free, automated method for CT IQA that closely reflects radiologists' evaluations, reducing the dependency on large annotated datasets.
Approach: We propose Task-Focused Knowledge Transfer (TFKT), a deep learning-based IQA method leveraging knowledge transfer from task-similar natural image datasets. TFKT incorporates a hybrid convolutional neural network-transformer model, enabling accurate quality predictions by learning from natural image distortions with human-annotated mean opinion scores. The model is pre-trained on natural image datasets and fine-tuned on low-dose computed tomography perceptual image quality assessment data to ensure task-specific adaptability.
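The pretrain-then-fine-tune pattern can be sketched in NumPy: a frozen feature extractor stands in for the pretrained backbone, and only a small regression head is fitted to mean-opinion-score-like targets. All shapes, data, and the training loop are synthetic placeholders, not the TFKT architecture.

```python
import numpy as np

rng = np.random.default_rng(3)

def pretrained_features(x):
    """Stand-in for a CNN-transformer backbone pretrained on natural-image
    IQA; here just a fixed random projection with a nonlinearity."""
    W = np.random.default_rng(42).normal(size=(x.shape[1], 32))
    return np.tanh(x @ W)

# tiny synthetic fine-tuning set: flattened "CT slices" with MOS-like scores
X = rng.normal(size=(64, 100))
y = rng.uniform(1.0, 5.0, size=64)

F = pretrained_features(X)          # backbone output, kept frozen
w = np.zeros(32)                    # only this head is trained
b = 0.0
lr = 0.05
losses = []
for _ in range(200):                # full-batch gradient descent on MSE
    pred = F @ w + b
    err = pred - y
    losses.append(float(np.mean(err**2)))
    w -= lr * F.T @ err / len(y)
    b -= lr * err.mean()
print(losses[0] > losses[-1])       # True: the head fits the scores
```

Freezing the backbone and adapting only a small head is the cheapest form of the transfer described above; the paper fine-tunes on LDCT perceptual-quality data rather than synthetic targets.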
Results: Extensive evaluations demonstrate that the proposed TFKT method effectively predicts IQA scores aligned with radiologists' assessments on in-domain datasets and generalizes well to out-of-domain clinical pediatric CT exams. The model achieves robust performance without requiring high-dose reference images. Our model is capable of assessing the quality of ∼30 CT image slices in a second.
Conclusions: The proposed TFKT approach provides a scalable, accurate, and reference-free solution for CT IQA. The model bridges the gap between traditional and deep learning-based IQA, offering clinically relevant and computationally efficient assessments applicable to real-world clinical settings.
Title: "TFKT V2: task-focused knowledge transfer from natural images for computed tomography perceptual image quality assessment." Journal of Medical Imaging, 12(5), 051805. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12116730/pdf/
Pub Date: 2025-09-01 | Epub Date: 2025-09-17 | DOI: 10.1117/1.JMI.12.5.054001
Andrew M Birnbaum, Adam Buchwald, Peter Turkeltaub, Adam Jacks, George Carr, Shreya Kannan, Yu Huang, Abhisheck Datta, Lucas C Parra, Lukas A Hirsch
Purpose: Our goal was to develop a deep network for whole-head segmentation, including clinical magnetic resonance imaging (MRI) with abnormal anatomy, and compile the first public benchmark dataset for this purpose. We collected 98 MRIs with volumetric segmentation labels for a diverse set of human subjects, including normal and abnormal anatomy in clinical cases of stroke and disorders of consciousness.
Approach: Training labels were generated by manually correcting initial automated segmentations for skin/scalp, skull, cerebrospinal fluid, gray matter, white matter, air cavity, and extracephalic air. We developed a "MultiAxial" network consisting of three 2D U-Nets that operate independently in the sagittal, axial, and coronal planes and are then combined to produce a single 3D segmentation.
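One simple way to combine per-plane predictions (an illustrative fusion rule; the paper's exact combination may differ) is to average the three class-probability volumes and take a per-voxel argmax, with Dice overlap as the evaluation metric:

```python
import numpy as np

def combine_axis_predictions(prob_sag, prob_ax, prob_cor):
    """Fuse per-axis class-probability volumes of shape (C, X, Y, Z) by
    averaging, then take the argmax class per voxel. Averaging is an
    illustrative fusion rule, not necessarily the MultiAxial one."""
    mean_prob = (prob_sag + prob_ax + prob_cor) / 3.0
    return mean_prob.argmax(axis=0)

def dice(seg_a, seg_b, label):
    """Dice overlap for one tissue label, as used to score segmentations."""
    a, b = (seg_a == label), (seg_b == label)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

rng = np.random.default_rng(4)
# three toy per-axis outputs: 7 tissue classes over an 8^3 volume
probs = [rng.dirichlet(np.ones(7), size=(8, 8, 8)).transpose(3, 0, 1, 2)
         for _ in range(3)]
seg = combine_axis_predictions(*probs)
print(seg.shape, dice(seg, seg, label=1))   # (8, 8, 8) 1.0
```

Averaging the three views lets planes with clear in-plane context outvote a plane where a structure is ambiguous, which is the intuition behind running the same segmentation along all three axes.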
Results: The MultiAxial network achieved test-set Dice scores of 0.88 ± 0.04 (median ± interquartile range) on whole-head segmentation, including gray and white matter, compared with 0.86 ± 0.04 for Multipriors and 0.79 ± 0.10 for SPM12, two standard tools currently available for this task. The MultiAxial network gains in robustness by avoiding the need for coregistration with an atlas. It performed well in regions with abnormal anatomy and on images that have been de-identified. It enables more accurate and robust current flow modeling when incorporated into ROAST, a widely used modeling toolbox for transcranial electric stimulation.
Conclusions: We are releasing a new state-of-the-art tool for whole-head MRI segmentation in abnormal anatomy, along with the largest volume of labeled clinical head MRIs, including labels for nonbrain structures. Together, the model and data may serve as a benchmark for future efforts.
Title: "Full-head segmentation of MRI with abnormal brain anatomy: model and data release." Journal of Medical Imaging, 12(5), 054001. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12442731/pdf/
Pub Date: 2025-09-01 | Epub Date: 2025-10-18 | DOI: 10.1117/1.JMI.12.5.053502
Xiao Jiang, Grace J Gang, J Webster Stayman
Purpose: Medical implants, often made of dense materials, pose significant challenges to accurate computed tomography (CT) reconstruction, especially near implants, due to beam hardening and partial-volume artifacts. Moreover, diagnostics involving implants often require separate visualization of implants and anatomy. In this work, we propose an approach for joint estimation of anatomy and implants as separate volumes using a mixed prior model.
Approach: We leverage a learning-based prior for anatomy and a sparsity prior for implants to decouple the two volumes. In addition, a hybrid mono-polyenergetic forward model is employed to accommodate the spectral effects of implants, and a multiresolution object model is used to achieve high-resolution implant reconstruction. The reconstruction process alternates between diffusion posterior sampling for anatomy updates and classic optimization for implants and spectral coefficients.
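The decoupling idea can be illustrated on a toy 1D problem: split a signal into a smooth "anatomy" component and a sparse "implant" component by alternating a quadratic-smoothness update with L1 soft-thresholding. This substitutes a simple smoothness prior for the paper's diffusion posterior sampling and ignores the spectral and multiresolution models entirely; it is only an analogue of the alternation.

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def decompose(y, n_iter=200, lam_smooth=5.0, lam_sparse=0.5):
    """Alternately fit y ≈ anatomy + implant: a smoothness prior on
    `anatomy` (closed-form quadratic update) and a sparsity prior on
    `implant` (soft-thresholding). A simplified analogue of the paper's
    diffusion-prior / sparsity-prior alternation."""
    n = len(y)
    anatomy = np.zeros(n)
    implant = np.zeros(n)
    D = np.diff(np.eye(n), n=2, axis=0)        # second-difference operator
    H_inv = np.linalg.inv(np.eye(n) + lam_smooth * D.T @ D)
    for _ in range(n_iter):
        anatomy = H_inv @ (y - implant)                     # smooth update
        implant = soft_threshold(y - anatomy, lam_sparse)   # sparse update
    return anatomy, implant

x = np.linspace(0.0, 1.0, 64)
smooth_true = np.sin(2 * np.pi * x)            # "anatomy"
spikes_true = np.zeros(64)
spikes_true[20], spikes_true[45] = 4.0, -3.0   # "implants"
anatomy, implant = decompose(smooth_true + spikes_true)
print(np.argmax(np.abs(implant)))              # strongest spike, index 20
```

Because each step exactly minimizes one block of a shared objective (quadratic fit plus smoothness plus L1 penalty), the alternation decreases that objective monotonically, mirroring how the paper alternates anatomy and implant updates.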
Results: Evaluations were performed on emulated cardiac imaging with stent and spine imaging with pedicle screws. The structures of the cardiac stent with 0.25 mm wires were clearly visualized in the implant images, whereas the blooming artifacts around the stent were effectively suppressed in the anatomical reconstruction. For pedicle screws, the proposed algorithm mitigated streaking and beam-hardening artifacts in the anatomy volume, demonstrating significant improvements in SSIM and PSNR compared with frequency-splitting metal artifact reduction and model-based reconstruction on slices containing implants.
Conclusion: The proposed mixed prior model coupled with a hybrid spectral and multiresolution model can help to separate spatially and spectrally distinct objects that differ from anatomical features in single-energy CT, improving both image quality and separate visualization of implants and anatomy.
{"title":"Joint CT reconstruction of anatomy and implants using a mixed prior model.","authors":"Xiao Jiang, Grace J Gang, J Webster Stayman","doi":"10.1117/1.JMI.12.5.053502","DOIUrl":"10.1117/1.JMI.12.5.053502","url":null,"abstract":"<p><strong>Purpose: </strong>Medical implants, often made of dense materials, pose significant challenges to accurate computed tomography (CT) reconstruction, especially near implants due to beam hardening and partial-volume artifacts. Moreover, diagnostics involving implants often require separate visualization for implants and anatomy. In this work, we propose a approach for joint estimation of anatomy and implants as separate volumes using a mixed prior model.</p><p><strong>Approach: </strong>We leverage a learning-based prior for anatomy and a sparsity prior for implants to decouple the two volumes. In addition, a hybrid mono-polyenergetic forward model is employed to accommodate the spectral effects of implants, and a multiresolution object model is used to achieve high-resolution implant reconstruction. The reconstruction process alternates between diffusion posterior sampling for anatomy updates and classic optimization for implants and spectral coefficients.</p><p><strong>Results: </strong>Evaluations were performed on emulated cardiac imaging with stent and spine imaging with pedicle screws. The structures of the cardiac stent with 0.25 mm wires were clearly visualized in the implant images, whereas the blooming artifacts around the stent were effectively suppressed in the anatomical reconstruction. 
For pedicle screws, the proposed algorithm mitigated streaking and beam-hardening artifacts in the anatomy volume, demonstrating significant improvements in SSIM and PSNR compared with frequency-splitting metal artifact reduction and model-based reconstruction on slices containing implants.</p><p><strong>Conclusion: </strong>The proposed mixed prior model coupled with a hybrid spectral and multiresolution model can help to separate spatially and spectrally distinct objects that differ from anatomical features in single-energy CT, improving both image quality and separate visualization of implants and anatomy.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"12 5","pages":"053502"},"PeriodicalIF":1.7,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12537543/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145349286","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
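The SSIM/PSNR improvements reported above are standard full-reference image-quality comparisons. As a minimal illustration (not the paper's code; the function name and interface are ours), PSNR between a reference volume and a reconstruction can be computed as:

```python
import numpy as np

def psnr(ref, img, data_range=None):
    """Peak signal-to-noise ratio in dB; higher is better.

    data_range defaults to the dynamic range of the reference image,
    which is a common convention for CT volumes in arbitrary units.
    """
    ref = np.asarray(ref, dtype=float)
    img = np.asarray(img, dtype=float)
    if data_range is None:
        data_range = ref.max() - ref.min()
    mse = np.mean((ref - img) ** 2)  # mean squared reconstruction error
    return 10.0 * np.log10(data_range ** 2 / mse)
```

In an evaluation like the one described above, such metrics would be computed on the slices containing implants, where metal artifacts dominate.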
Pub Date : 2025-09-01Epub Date: 2025-09-17DOI: 10.1117/1.JMI.12.5.054501
Isabella Cama, Alejandro Guzmán, Cristina Campi, Michele Piana, Karim Lekadir, Sara Garbarino, Oliver Díaz
Purpose: Many studies caution against using radiomic features that are sensitive to contouring variability in predictive models for disease stratification. Consequently, metrics such as the intraclass correlation coefficient (ICC) are recommended to guide feature selection based on stability. However, the direct impact of segmentation variability on the performance of predictive models remains underexplored. We examine how segmentation variability affects both feature stability and predictive performance in the radiomics-based classification of triple-negative breast cancer (TNBC) using breast magnetic resonance imaging.
Approach: We analyzed 244 images from the Duke dataset, introducing segmentation variability through controlled modifications of manual segmentations. For each segmentation mask, explainable radiomic features were selected using Shapley Additive exPlanations and used to train logistic regression models. Feature stability across segmentations was assessed via ICC, Pearson's correlation, and reliability scores quantifying the relationship between segmentation variability and feature robustness.
Results: Model performances in predicting TNBC do not exhibit a significant difference across varying segmentations. The most explicative and predictive features exhibit decreasing ICC as segmentation accuracy decreases. However, their predictive power remains intact, as the low ICC is accompanied by a high Pearson's correlation. No shared numerical relationship is found between feature stability and segmentation variability among the most predictive features.
Conclusions: Moderate segmentation variability has a limited impact on model performance. Although incorporating peritumoral information may reduce feature reproducibility, it does not compromise predictive utility. Notably, feature stability is not a strict prerequisite for predictive relevance, highlighting that exclusive reliance on ICC or stability metrics for feature selection may inadvertently discard informative features.
{"title":"Segmentation variability and radiomics stability for predicting triple-negative breast cancer subtype using magnetic resonance imaging.","authors":"Isabella Cama, Alejandro Guzmán, Cristina Campi, Michele Piana, Karim Lekadir, Sara Garbarino, Oliver Díaz","doi":"10.1117/1.JMI.12.5.054501","DOIUrl":"https://doi.org/10.1117/1.JMI.12.5.054501","url":null,"abstract":"<p><strong>Purpose: </strong>Many studies caution against using radiomic features that are sensitive to contouring variability in predictive models for disease stratification. Consequently, metrics such as the intraclass correlation coefficient (ICC) are recommended to guide feature selection based on stability. However, the direct impact of segmentation variability on the performance of predictive models remains underexplored. We examine how segmentation variability affects both feature stability and predictive performance in the radiomics-based classification of triple-negative breast cancer (TNBC) using breast magnetic resonance imaging.</p><p><strong>Approach: </strong>We analyzed 244 images from the Duke dataset, introducing segmentation variability through controlled modifications of manual segmentations. For each segmentation mask, explainable radiomic features were selected using Shapley Additive exPlanations and used to train logistic regression models. Feature stability across segmentations was assessed via ICC, Pearson's correlation, and reliability scores quantifying the relationship between segmentation variability and feature robustness.</p><p><strong>Results: </strong>Model performances in predicting TNBC do not exhibit a significant difference across varying segmentations. The most explicative and predictive features exhibit decreasing ICC as segmentation accuracy decreases. However, their predictive power remains intact due to low ICC combined with high Pearson's correlation. 
No shared numerical relationship is found between feature stability and segmentation variability among the most predictive features.</p><p><strong>Conclusions: </strong>Moderate segmentation variability has a limited impact on model performance. Although incorporating peritumoral information may reduce feature reproducibility, it does not compromise predictive utility. Notably, feature stability is not a strict prerequisite for predictive relevance, highlighting that exclusive reliance on ICC or stability metrics for feature selection may inadvertently discard informative features.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"12 5","pages":"054501"},"PeriodicalIF":1.7,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12443385/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145087856","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
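The stability statistic this study relies on, ICC, can be sketched in a few lines of numpy. The version below implements ICC(2,1) (two-way random effects, absolute agreement, single measurement); the function name and toy data are ours. It also reproduces the abstract's key observation: a systematic offset between segmentation variants lowers ICC even when Pearson's correlation stays at 1.

```python
import numpy as np

def icc2_1(Y):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    Y has shape (n_lesions, k_segmentation_variants): one radiomic feature
    measured on each lesion under each segmentation variant.
    """
    n, k = Y.shape
    grand = Y.mean()
    row_means = Y.mean(axis=1)  # per-lesion means
    col_means = Y.mean(axis=0)  # per-variant means
    msr = k * ((row_means - grand) ** 2).sum() / (n - 1)  # between lesions
    msc = n * ((col_means - grand) ** 2).sum() / (k - 1)  # between variants
    mse = ((Y - row_means[:, None] - col_means[None, :] + grand) ** 2).sum() \
          / ((n - 1) * (k - 1))                            # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# A constant offset between two segmentation variants (e.g., including a
# peritumoral margin) breaks absolute agreement while leaving the linear
# relationship perfect.
feature = np.array([[1.0, 2.0], [2.0, 3.0], [3.0, 4.0]])
```

Here `icc2_1(feature)` is about 0.67 while `np.corrcoef(feature[:, 0], feature[:, 1])[0, 1]` is exactly 1.0, which mirrors the "low ICC combined with high Pearson's correlation" pattern reported in the results.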
Pub Date : 2025-09-01Epub Date: 2025-04-11DOI: 10.1117/1.JMI.12.5.051803
Michelle C Pryde, James Rioux, Adela Elena Cora, David Volders, Matthias H Schmidt, Mohammed Abdolell, Chris Bowen, Steven D Beyea
Purpose: Objective image quality metrics (IQMs) are widely used as outcome measures to assess acquisition and reconstruction strategies for diagnostic images. For nonpathological magnetic resonance (MR) images, these IQMs correlate to varying degrees with expert radiologists' confidence scores of overall perceived diagnostic image quality. However, it is unclear whether IQMs also correlate with task-specific diagnostic image quality or expert radiologists' confidence in performing a specific diagnostic task, which calls into question their use as surrogates for radiologist opinion.
Approach: 0.5 T MR images from 16 stroke patients and two healthy volunteers were retrospectively undersampled (R = 1 to 7×) and reconstructed via compressed sensing. Three neuroradiologists reported the presence/absence of acute ischemic stroke (AIS) and assigned a Fazekas score describing the extent of chronic ischemic lesion burden. Neuroradiologists ranked their confidence in performing each task using a 1 to 5 Likert scale. Confidence scores were correlated with noise quality measure, the visual information fidelity criterion, the feature similarity index, root mean square error, and structural similarity (SSIM) via nonlinear regression modeling.
Results: Although acceleration alters image quality, neuroradiologists remain able to report pathology. All of the IQMs tested correlated to some degree with diagnostic confidence for assessing chronic ischemic lesion burden, but none correlated with diagnostic confidence in diagnosing the presence/absence of AIS due to consistent radiologist performance regardless of image degradation.
Conclusions: Accelerated images were helpful for understanding the ability of IQMs to assess task-specific diagnostic image quality in the context of chronic ischemic lesion burden, although not in the case of AIS diagnosis. These findings suggest that commonly used IQMs, such as the SSIM index, do not necessarily indicate an image's utility when performing certain diagnostic tasks.
{"title":"Correlation of objective image quality metrics with radiologists' diagnostic confidence depends on the clinical task performed.","authors":"Michelle C Pryde, James Rioux, Adela Elena Cora, David Volders, Matthias H Schmidt, Mohammed Abdolell, Chris Bowen, Steven D Beyea","doi":"10.1117/1.JMI.12.5.051803","DOIUrl":"10.1117/1.JMI.12.5.051803","url":null,"abstract":"<p><strong>Purpose: </strong>Objective image quality metrics (IQMs) are widely used as outcome measures to assess acquisition and reconstruction strategies for diagnostic images. For nonpathological magnetic resonance (MR) images, these IQMs correlate to varying degrees with expert radiologists' confidence scores of overall perceived diagnostic image quality. However, it is unclear whether IQMs also correlate with task-specific diagnostic image quality or expert radiologists' confidence in performing a specific diagnostic task, which calls into question their use as surrogates for radiologist opinion.</p><p><strong>Approach: </strong>0.5 T MR images from 16 stroke patients and two healthy volunteers were retrospectively undersampled ( <math><mrow><mi>R</mi> <mo>=</mo> <mn>1</mn></mrow> </math> to <math><mrow><mn>7</mn> <mo>×</mo></mrow> </math> ) and reconstructed via compressed sensing. Three neuroradiologists reported the presence/absence of acute ischemic stroke (AIS) and assigned a Fazekas score describing the extent of chronic ischemic lesion burden. Neuroradiologists ranked their confidence in performing each task using a 1 to 5 Likert scale. Confidence scores were correlated with noise quality measure, the visual information fidelity criterion, the feature similarity index, root mean square error, and structural similarity (SSIM) via nonlinear regression modeling.</p><p><strong>Results: </strong>Although acceleration alters image quality, neuroradiologists remain able to report pathology. 
All of the IQMs tested correlated to some degree with diagnostic confidence for assessing chronic ischemic lesion burden, but none correlated with diagnostic confidence in diagnosing the presence/absence of AIS due to consistent radiologist performance regardless of image degradation.</p><p><strong>Conclusions: </strong>Accelerated images were helpful for understanding the ability of IQMs to assess task-specific diagnostic image quality in the context of chronic ischemic lesion burden, although not in the case of AIS diagnosis. These findings suggest that commonly used IQMs, such as the SSIM index, do not necessarily indicate an image's utility when performing certain diagnostic tasks.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"12 5","pages":"051803"},"PeriodicalIF":1.9,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11991859/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144018546","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
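Relating an objective IQM to ordinal Likert confidence scores is a correlation-of-ranks problem. The study above used nonlinear regression modeling; as a simpler stand-in (our choice, not the paper's method), a Spearman rank correlation with tie handling can be sketched as:

```python
import numpy as np

def _avg_ranks(v):
    """Ranks starting at 1, with tied values given their average rank
    (Likert scores almost always contain ties)."""
    v = np.asarray(v, dtype=float)
    order = np.argsort(v)
    ranks = np.empty(len(v))
    ranks[order] = np.arange(1, len(v) + 1)
    for val in np.unique(v):
        tie = v == val
        ranks[tie] = ranks[tie].mean()
    return ranks

def spearman(metric, likert):
    """Spearman rank correlation between an image-quality metric
    (e.g., SSIM per image) and 1-5 confidence scores."""
    return np.corrcoef(_avg_ranks(metric), _avg_ranks(likert))[0, 1]
```

A flat correlation from such a test, as seen for the AIS task above, would indicate that radiologist confidence does not track the metric across acceleration levels.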
Pub Date : 2025-09-01Epub Date: 2025-06-18DOI: 10.1117/1.JMI.12.5.051806
Alisa Mohebbi, Ali Abdi, Saeed Mohammadzadeh, Mohammad Mirza-Aghazadeh-Attari, Ali Abbasian Ardakani, Afshin Mohammadi
Purpose: Our purpose is to assess the inter-rater agreement between digital mammography (DM) and contrast-enhanced spectral mammography (CESM) in evaluating the Breast Imaging Reporting and Data System (BI-RADS) grading.
Approach: This retrospective study included 326 patients recruited between January 2019 and February 2021. The study protocol was pre-registered on the Open Science Framework platform. Two expert radiologists interpreted the CESM and DM findings. Pathological data were used as the reference standard for radiologically suspicious or malignant-appearing lesions, whereas follow-up was considered the gold standard for benign-appearing lesions and breasts without lesions.
Results: For intra-device agreement, both imaging modalities showed "almost perfect" agreement, indicating that different radiologists are expected to report the same BI-RADS score for the same image. Despite showing a similar interpretation, a paired t-test showed significantly higher agreement for CESM compared with DM (p < 0.001). Subgrouping based on the side or view did not show a considerable difference for either imaging modality. For inter-device agreement, "almost perfect" agreement was also achieved. However, for proven malignant lesions, an overall higher BI-RADS score was achieved for CESM, whereas for benign or normal breasts, a lower BI-RADS score was reported, indicating a more precise BI-RADS classification for CESM compared with DM.
Conclusions: Our findings demonstrated strong agreement among readers regarding the identification of DM and CESM findings in breast images from various views. Moreover, they indicate that CESM is as precise as DM and can be used as an alternative in clinical centers.
{"title":"Contrast-enhanced spectral mammography demonstrates better inter-reader repeatability than digital mammography for screening breast cancer patients.","authors":"Alisa Mohebbi, Ali Abdi, Saeed Mohammadzadeh, Mohammad Mirza-Aghazadeh-Attari, Ali Abbasian Ardakani, Afshin Mohammadi","doi":"10.1117/1.JMI.12.5.051806","DOIUrl":"10.1117/1.JMI.12.5.051806","url":null,"abstract":"<p><strong>Purpose: </strong>Our purpose is to assess the inter-rater agreement between digital mammography (DM) and contrast-enhanced spectral mammography (CESM) in evaluating the Breast Imaging Reporting and Data System (BI-RADS) grading.</p><p><strong>Approach: </strong>This retrospective study included 326 patients recruited between January 2019 and February 2021. The study protocol was pre-registered on the Open Science Framework platform. Two expert radiologists interpreted the CESM and DM findings. Pathological data are used for radiologically suspicious or malignant-appearing lesions, whereas follow-up was considered the gold standard for benign-appearing lesions and breasts without lesions.</p><p><strong>Results: </strong>For intra-device agreement, both imaging modalities showed \"almost perfect\" agreement, indicating that different radiologists are expected to report the same BI-RADS score for the same image. Despite showing a similar interpretation, a paired <math><mrow><mi>t</mi></mrow> </math> -test showed significantly higher agreement for CESM compared with DM ( <math><mrow><mi>p</mi> <mo><</mo> <mn>0.001</mn></mrow> </math> ). Subgrouping based on the side or view did not show a considerable difference for both imaging modalities. For inter-device agreement, \"almost perfect\" agreement was also achieved. 
However, for proven malignant lesions, an overall higher BI-RADS score was achieved for CESM, whereas for benign or normal breasts, a lower BI-RADS score was reported, indicating a more precise BI-RADS classification for CESM compared with DM.</p><p><strong>Conclusions: </strong>Our findings demonstrated strong agreement among readers regarding the identification of DM and CESM findings in breast images from various views. Moreover, it indicates that CESM is equally precise compared with DM and can be used as an alternative in clinical centers.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"12 5","pages":"051806"},"PeriodicalIF":1.9,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12175086/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144334196","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
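The "almost perfect" wording above follows the Landis-Koch convention for interpreting Cohen's kappa (roughly, κ above 0.80). A minimal unweighted-kappa sketch for two readers' BI-RADS scores follows; the function name and toy data are ours, and the study may well have used a weighted variant for ordinal BI-RADS categories.

```python
import numpy as np

def cohens_kappa(reader1, reader2, categories):
    """Unweighted Cohen's kappa: chance-corrected agreement between two
    readers' categorical ratings (e.g., BI-RADS scores)."""
    idx = {c: i for i, c in enumerate(categories)}
    k = len(categories)
    conf = np.zeros((k, k))                      # reader1 x reader2 counts
    for a, b in zip(reader1, reader2):
        conf[idx[a], idx[b]] += 1
    n = conf.sum()
    p_obs = np.trace(conf) / n                   # observed agreement
    p_exp = (conf.sum(axis=1) @ conf.sum(axis=0)) / n ** 2  # chance agreement
    return (p_obs - p_exp) / (1 - p_exp)
```

Kappa is 1 for perfect agreement and 0 when agreement is no better than chance; values in the 0.81 to 1.00 band are labeled "almost perfect" on the Landis-Koch scale.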