Purpose: Predictive models for contrast-enhanced mammography often perform better at detecting and classifying enhancing masses than (non-enhancing) microcalcification clusters. We aim to investigate whether incorporating synthetic data with simulated microcalcification clusters during training can enhance model performance.
Approach: Microcalcification clusters were simulated in low-energy images of lesion-free breasts from 782 patients, considering local texture features. Enhancement was simulated in the corresponding recombined images. A deep learning (DL) model for lesion detection and classification was trained with varying ratios of synthetic and real (850 patients) data. In addition, a handcrafted radiomics classifier was trained using delineations and class labels from real data, and predictions from both models were ensembled. Validation was performed on internal (212 patients) and external (279 patients) real datasets.
Results: The DL model trained exclusively with synthetic data detected over 60% of malignant lesions. Adding synthetic data to smaller real training sets improved detection sensitivity for malignant lesions but decreased precision. Performance plateaued at a detection sensitivity of 0.80. The ensembled DL and radiomics models performed worse than the standalone DL model, decreasing the area under the receiver operating characteristic curve from 0.75 to 0.60 on the external validation set, likely due to falsely detected suspicious regions of interest.
Conclusions: Synthetic data can enhance DL model performance, provided model setup and data distribution are optimized. The possibility to detect malignant lesions without real data present in the training set confirms the utility of synthetic data. It can serve as a helpful tool, especially when real data are scarce, and it is most effective when complementing real data.
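The approach above trains with varying ratios of synthetic and real cases. As a minimal illustrative sketch (not the authors' pipeline), a training set with a prescribed synthetic fraction could be composed as follows; the pool sizes and the `mix_training_set` helper are hypothetical:

```python
import random

def mix_training_set(real, synthetic, synth_fraction, n_total, seed=0):
    """Compose a training list with a given fraction of synthetic cases,
    sampling with replacement from each pool (hypothetical helper)."""
    rng = random.Random(seed)
    n_synth = round(synth_fraction * n_total)
    picks = [rng.choice(synthetic) for _ in range(n_synth)]
    picks += [rng.choice(real) for _ in range(n_total - n_synth)]
    rng.shuffle(picks)
    return picks

# pool sizes mirror the abstract's cohorts, contents are placeholders
real_cases = [("real", i) for i in range(850)]
synthetic_cases = [("synthetic", i) for i in range(782)]
train = mix_training_set(real_cases, synthetic_cases, 0.25, 1000)
```

Sweeping `synth_fraction` from 0 to 1 reproduces the kind of ratio study described in the Approach.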
Purpose: Accurate assessment of breast density is important for breast cancer risk estimation, in part because dense tissue can mask lesions. Although governed by standardized guidelines, radiologist assessment of breast density is still highly variable. Automated breast density assessment tools leverage deep learning but are limited by model robustness and interpretability.
Approach: We assessed the robustness of a feature selection methodology (RFE-SHAP) for classifying breast density grades using tissue-specific radiomic features extracted from raw central projections of digital breast tomosynthesis screenings ( , ). RFE-SHAP leverages traditional and explainable AI methods to identify highly predictive and influential features. A simple logistic regression (LR) classifier was used to assess classification performance, and unsupervised clustering was employed to investigate the intrinsic separability of density grade classes.
Results: LR classifiers yielded cross-validated areas under the receiver operating characteristic (AUCs) per density grade of [ : , : , : , : ] and an AUC of for classifying patients as nondense or dense. In external validation, we observed per density grade AUCs of [ : 0.880, : 0.779, : 0.878, : 0.673] and nondense/dense AUC of 0.823. Unsupervised clustering highlighted the ability of these features to characterize different density grades.
Conclusions: Our RFE-SHAP feature selection methodology for classifying breast tissue density generalized well to validation datasets after accounting for natural class imbalance, and the identified radiomic features properly captured the progression of density grades. Our results potentiate future research into correlating selected radiomic features with clinical descriptors of breast tissue density.
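The RFE-SHAP idea of recursively discarding the least SHAP-influential feature can be sketched without a SHAP library, because for a linear model the exact SHAP value of feature j on sample i reduces to w_j(x_ij - mean_j). The data, dimensions, and helper names below are hypothetical, and the plain gradient-descent logistic regression is a stand-in for the study's classifier:

```python
import numpy as np

def fit_logreg(X, y, lr=0.1, n_iter=2000):
    """Unregularized logistic regression via batch gradient descent."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        g = p - y
        w -= lr * X.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

def shap_importance(X, w):
    """For a linear model, SHAP_ij = w_j * (x_ij - mean_j);
    feature importance is the mean absolute SHAP value."""
    return np.mean(np.abs((X - X.mean(axis=0)) * w), axis=0)

def rfe_shap(X, y, n_keep):
    """Recursively drop the least influential feature until n_keep remain."""
    idx = list(range(X.shape[1]))
    while len(idx) > n_keep:
        w, _ = fit_logreg(X[:, idx], y)
        idx.pop(int(np.argmin(shap_importance(X[:, idx], w))))
    return idx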
Purpose: We aim to evaluate the change in mammographic density within individuals across screening rounds using automated density software, to assess whether a change in breast density is associated with a future breast cancer diagnosis, and to provide insight into breast density evolution.
Approach: Mammographic breast density was analyzed in women screened in Malmö, Sweden, between 2010 and 2015 who had undergone at least two consecutive screening rounds months apart. The volumetric and area-based densities were measured with deep learning-based software and fully automated software, respectively. The change in volumetric breast density percentage (VBD%) between two consecutive screening examinations was determined. Multiple linear regression was used to investigate the association between VBD% change in percentage points and future breast cancer, as well as the initial VBD%, adjusting for age group and the time between examinations. Examinations with potential positioning issues were removed in a sensitivity analysis.
Results: In 26,056 included women, the mean VBD% decreased from 10.7% [95% confidence interval (CI) 10.6 to 10.8] to 10.3% (95% CI: 10.2 to 10.3) ( ) between the two examinations. The decline in VBD% was more pronounced in women with initially denser breasts (adjusted , ) and less pronounced in women with a future breast cancer diagnosis (adjusted , ).
Conclusions: The demonstrated density changes over time support the potential of using breast density change in risk assessment tools and provide insights for future risk-based screening.
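The multiple linear regression in the Approach (VBD% change regressed on initial VBD%, future cancer status, age group, and inter-examination time) can be sketched with ordinary least squares. All covariate distributions and coefficient values below are illustrative placeholders, not the study's estimates:

```python
import numpy as np

def fit_ols(X, y):
    """Ordinary least squares with an intercept; returns [b0, b1, ..., bp]."""
    A = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

# hypothetical covariates mirroring the abstract's model
rng = np.random.default_rng(1)
n = 5000
X = np.column_stack([
    rng.uniform(3, 35, n),       # initial VBD%
    rng.binomial(1, 0.02, n),    # future breast cancer indicator
    rng.integers(0, 4, n),       # age group (coded 0-3)
    rng.uniform(1.5, 2.5, n),    # years between examinations
])
true_beta = np.array([0.5, -0.04, 0.30, -0.05, -0.10])  # illustrative only
y = np.column_stack([np.ones(n), X]) @ true_beta        # noiseless outcome
coef = fit_ols(X, y)
```

On this noiseless toy data the fitted coefficients recover `true_beta` exactly; a negative coefficient on initial VBD% and a positive one on the cancer indicator would correspond to the direction of effects reported in the Results.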
Purpose: Recent developments in computational pathology have been driven by advances in vision foundation models (VFMs), particularly the Segment Anything Model (SAM). This model facilitates nuclei segmentation through two primary methods: prompt-based zero-shot segmentation and the use of cell-specific SAM models for direct segmentation. These approaches enable effective segmentation across a range of nuclei and cells. However, general VFMs often face challenges with fine-grained semantic segmentation, such as identifying specific nuclei subtypes or particular cells.
Approach: In this paper, we propose the molecular-empowered all-in-SAM model to advance computational pathology by leveraging the capabilities of VFMs. The model takes a full-stack approach: (1) annotation: engaging lay annotators through molecular-empowered learning to reduce the need for detailed pixel-level annotations; (2) learning: adapting the SAM model to emphasize specific semantics via a SAM adapter, exploiting its strong generalizability; and (3) refinement: enhancing segmentation accuracy by integrating molecular-oriented corrective learning.
Results: Experimental results from both in-house and public datasets show that the all-in-SAM model significantly improves cell classification performance, even when faced with varying annotation quality.
Conclusions: Our approach not only reduces the workload for annotators but also extends the accessibility of precise biomedical image analysis to resource-limited settings, thereby advancing medical diagnostics and automating pathology image analysis.
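The "SAM adapter" in step (2) follows the general bottleneck-adapter pattern used to specialize a frozen backbone. The sketch below shows that generic pattern only, not the paper's actual implementation; the class name, dimensions, and zero-initialization convention are assumptions:

```python
import numpy as np

class Adapter:
    """Bottleneck adapter: down-project, nonlinearity, up-project, residual.
    The up-projection is zero-initialized, so at insertion time the adapter
    acts as the identity and leaves the frozen backbone's features intact."""
    def __init__(self, dim, bottleneck, seed=0):
        rng = np.random.default_rng(seed)
        self.W_down = rng.standard_normal((dim, bottleneck)) * dim ** -0.5
        self.W_up = np.zeros((bottleneck, dim))

    def __call__(self, x):
        h = np.maximum(x @ self.W_down, 0.0)  # ReLU bottleneck
        return x + h @ self.W_up              # residual connection
```

Only the small `W_down`/`W_up` matrices are trained, which is what lets a general-purpose VFM be steered toward specific semantics (e.g., nuclei subtypes) with few labels.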
Purpose: Intraoperative liver deformation and the need to glance repeatedly between the operative field and a remote monitor undermine the precision and workflow of image-guided liver surgery. Existing mixed reality (MR) prototypes address only isolated aspects of this challenge and lack quantitative validation in deformable anatomy.
Approach: We introduce a fully self-contained MR navigation system for liver surgery that runs on a MR headset and bridges this clinical gap by (1) stabilizing holographic content with an external retro-reflective reference tool that defines a fixed world origin, (2) tracking instruments and surface points in real time with the headset's depth camera, and (3) compensating soft-tissue deformation through a weighted ICP + linearized iterative boundary reconstruction pipeline. A lightweight server-client architecture streams deformation-corrected 3D models to the headset and enables hands-free control via voice commands.
Results: Validation on a multistate liver-phantom protocol demonstrated that the reference tool reduced mean hologram drift from to and improved tracking accuracy from to . Across five simulated deformation states, nonrigid registration lowered surface target registration error from to , an average 57% error reduction, yielding sub-4 mm guidance accuracy.
Conclusions: By unifying stable MR visualization, tool tracking, and biomechanical deformation correction in a single headset, the proposed platform eliminates monitor-related context switching and restores spatial fidelity lost to liver motion. The device-agnostic framework is extendable to open approaches and potentially laparoscopic workflows and other soft-tissue interventions, marking a significant step toward MR-enabled surgical navigation.
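The weighted ICP stage of the pipeline above repeatedly solves a weighted rigid-alignment subproblem for fixed correspondences. A minimal sketch of that inner step (the Kabsch/Umeyama closed-form solution), with hypothetical point sets:

```python
import numpy as np

def weighted_rigid_fit(src, dst, w):
    """One weighted ICP update for fixed correspondences: the rotation R and
    translation t minimizing sum_i w_i * ||R @ src_i + t - dst_i||^2."""
    w = w / w.sum()
    mu_s, mu_d = w @ src, w @ dst                      # weighted centroids
    H = (src - mu_s).T @ ((dst - mu_d) * w[:, None])   # weighted covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))             # guard vs. reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

In a full ICP loop, correspondences are re-estimated (e.g., by nearest neighbor) between such updates; down-weighting unreliable surface points is what makes the variant "weighted."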
Purpose: We aim to propose a reliable registration pipeline tailored for multimodal mouse bone imaging using X-ray microscopy (XRM) and light-sheet fluorescence microscopy (LSFM). These imaging modalities have emerged as pivotal tools in preclinical research, particularly for studying bone remodeling diseases such as osteoporosis. Although multimodal registration enables micrometer-level structural correspondence and facilitates functional analysis, conventional landmark-, feature-, or intensity-based approaches are often infeasible due to inconsistent signal characteristics and significant misalignment resulting from independent scanning, especially in real-world and reference-free scenarios.
Approach: To address these challenges, we introduce BigReg, an automatic, two-stage registration pipeline optimized for high-resolution XRM and LSFM volumes. The first stage involves extracting surface features and applying two successive global-to-local point-cloud-based methods for coarse alignment. The subsequent stage refines this alignment in the 3D Fourier domain using a modified cross-correlation technique, achieving precise volumetric registration.
Results: Evaluations using expert-annotated landmarks and augmented test data demonstrate that BigReg approaches the accuracy of landmark-based registration with a landmark distance (LMD) of and a landmark fitness (LM fitness) of . Moreover, BigReg can provide an optimal initialization for mutual information-based methods that otherwise fail independently, further reducing LMD to and increasing LM fitness to .
Conclusions: To the best of our knowledge, BigReg is the first automated method to successfully register XRM and LSFM volumes without requiring manual intervention or prior alignment cues. Its ability to accurately align fine-scale structures, such as lacunae in XRM and osteocytes in LSFM, opens up new avenues for quantitative, multimodal analysis of bone microarchitecture and disease pathology, particularly in studies of osteoporosis.
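BigReg's second stage refines alignment in the 3D Fourier domain with a modified cross-correlation. As an illustrative stand-in for that step (plain phase correlation, not the authors' modified variant), translation between two volumes can be recovered from the normalized cross-power spectrum:

```python
import numpy as np

def phase_correlation(ref, mov):
    """Estimate the integer shift s such that mov ~= np.roll(ref, s) by
    locating the peak of the normalized cross-power spectrum
    (a direct consequence of the Fourier shift theorem)."""
    F = np.fft.fftn(mov) * np.conj(np.fft.fftn(ref))
    F /= np.abs(F) + 1e-12                 # keep phase, discard magnitude
    corr = np.fft.ifftn(F).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # map peaks in the upper half of each axis to negative shifts
    return tuple(int(p) if p <= n // 2 else int(p) - n
                 for p, n in zip(peak, corr.shape))
```

Normalizing away the magnitude makes the peak sharp and robust to intensity differences between modalities, which is why Fourier-domain correlation is a natural refinement after point-cloud coarse alignment.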
Purpose: The Bayesian ideal observer (IO) is a special model observer that achieves the best possible performance on tasks that involve signal detection or discrimination. Although IOs are desired for optimizing and assessing imaging technologies, they remain difficult to compute. Previously, a hybrid method that combines deep learning (DL) with a Markov chain Monte Carlo (MCMC) method was proposed for estimating the IO test statistic for joint signal detection-estimation tasks. That method will be referred to as the hybrid MCMC method. However, the hybrid MCMC method was restricted to use cases that involved relatively simple stochastic background and signal models.
Approach: The previously developed hybrid MCMC method is generalized by utilizing a framework that integrates deep generative modeling into the MCMC sampling process. This method employs a generative adversarial network (GAN) that is trained on object or signal ensembles to establish data-driven stochastic object and signal models, respectively, and will be referred to as the hybrid MCMC-GAN method. This circumvents the limitation of traditional MCMC methods and enables the estimation of the IO test statistic with consideration of broader classes of clinically relevant object and signal models.
Results: The hybrid MCMC-GAN method was evaluated on two binary detection-estimation tasks in which the observer must detect a signal and estimate its amplitude if the signal is detected. First, a stylized signal-known-statistically (SKS) and background-known-exactly task was considered. A GAN was employed to establish a stochastic signal model, enabling direct comparison of our GAN-based IO approximation with a closed-form expression for the IO decision strategy. The results confirmed that the proposed method could accurately approximate the performance of the true IO. Next, an SKS and background-known-statistically (BKS) task was considered. Here, a GAN was employed to establish a stochastic object model that described anatomical variability in an ensemble of magnetic resonance (MR) brain images. This represented a setting where traditional MCMC methods are inapplicable. In this study, although a reference estimate of the true IO performance was unavailable, the hybrid MCMC-GAN produced area under the estimation receiver operating characteristic curve (AEROC) estimates that exceeded those of a sub-ideal observer that represented a lower bound for the IO performance.
Conclusion: By combining GAN-based generative modeling with MCMC, the hybrid MCMC-GAN method extends a previously proposed IO approximation method to more general detection-estimation tasks. This provides a new capability to benchmark and optimize imaging-system performance through virtual imaging studies.
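The core quantity being estimated above is the IO likelihood ratio, which marginalizes the signal-present likelihood over the stochastic signal model. A simplified sketch for an SKS detection task with i.i.d. Gaussian noise follows; here the amplitude samples stand in for draws from a trained generator, and all names and values are hypothetical:

```python
import numpy as np

def io_test_statistic(g, s, bkg, sigma, amp_samples):
    """Monte Carlo estimate of the log ideal-observer likelihood ratio for an
    SKS task: log [ E_a p(g | H1, a) / p(g | H0) ], with the expectation over
    amplitude samples a (e.g., drawn from a generative model)."""
    def loglike(mean):
        return -0.5 * np.sum((g - mean) ** 2) / sigma ** 2
    ll0 = loglike(bkg)
    ll1 = np.array([loglike(bkg + a * s) for a in amp_samples])
    m = ll1.max()                                   # log-sum-exp stability
    return (m + np.log(np.mean(np.exp(ll1 - m)))) - ll0
```

For more realistic object models (e.g., the GAN-described MR brain ensemble in the Results), the background itself becomes stochastic and must also be sampled, which is precisely the capability the hybrid MCMC-GAN method adds.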
The editorial introduces JMI Volume 12, Issue 5, marking the beginning of a new academic year with a celebration of innovation, mentorship, and the evolving role of scholarly publishing in medical imaging. It emphasizes that impactful research is not just about results but about teaching, sharing insights, and advancing the field through curiosity, collaboration, and community-driven resources.

