{"title":"Introduction to the JMI Special Section on Computational Pathology.","authors":"Baowei Fei, Metin Nafi Gurcan, Yuankai Huo, Pinaki Sarder, Aaron Ward","doi":"10.1117/1.JMI.12.6.061401","DOIUrl":"https://doi.org/10.1117/1.JMI.12.6.061401","url":null,"abstract":"","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"12 6","pages":"061401"},"PeriodicalIF":1.7,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12705466/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145776123","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-11-01 | Epub Date: 2025-03-12 | DOI: 10.1117/1.JMI.12.6.061402
Jing Wei Tan, Kyoungbun Lee, Won-Ki Jeong
Purpose: Biliary tract cancer, also known as intrahepatic cholangiocarcinoma (IHCC), is a rare disease that shows no clear symptoms during its early stage, but its prognosis depends highly on the cancer subtype. Hence, an accurate cancer subtype classification model is necessary to provide better treatment plans to patients and to reduce mortality. However, annotating histopathology images at the pixel or patch level is time-consuming and labor-intensive for giga-pixel whole slide images. To address this problem, we propose a weakly supervised method for classifying IHCC subtypes using only image-level labels.
Approach: The core idea of the proposed method is to detect regions (i.e., subimages or patches) commonly included in all subtypes, which we name the "hidden class," and to remove them via iterative application of contrastive loss and label smoothing. Doing so will enable us to obtain only patches that faithfully represent each subtype, which are then used to train the image-level classification model by multiple instance learning (MIL).
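The bag-level classification by multiple instance learning, plus the removal of hidden-class patches, can be sketched in a few lines. This is a simplified illustration, not the paper's architecture: the attention-pooling form, the linear scorers, and the distance-threshold filter are all illustrative assumptions.

```python
import numpy as np

def attention_mil_pool(patch_feats, w_att, w_cls):
    """ABMIL-style attention pooling (simplified, one-layer attention).

    patch_feats: (n_patches, d) patch embeddings for one slide.
    w_att: (d,) attention scoring vector; w_cls: (d,) bag classifier weights.
    Returns the bag-level logit.
    """
    scores = patch_feats @ w_att                 # raw attention score per patch
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                     # softmax over patches
    bag_embedding = weights @ patch_feats        # attention-weighted average
    return float(bag_embedding @ w_cls)

def drop_hidden_class(patch_feats, hidden_centroid, threshold):
    """Discard patches near a 'hidden class' centroid, i.e., patches common
    to all subtypes that carry no subtype-specific signal."""
    dists = np.linalg.norm(patch_feats - hidden_centroid, axis=1)
    return patch_feats[dists > threshold]
```

In the paper's pipeline, the filtering step would be applied iteratively (driven by contrastive loss and label smoothing) before the surviving patches train the MIL classifier.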
Results: Our method outperforms the state-of-the-art weakly supervised learning methods ABMIL, TransMIL, and DTFD-MIL by ∼17%, 18%, and 8%, respectively, and achieves performance comparable to that of supervised methods.
Conclusions: The introduction of a hidden class to represent patches commonly found across all subtypes enhances the accuracy of IHCC classification and addresses the weak labeling problem in histopathology images.
{"title":"HID-CON: weakly supervised intrahepatic cholangiocarcinoma subtype classification of whole slide images using contrastive hidden class detection.","authors":"Jing Wei Tan, Kyoungbun Lee, Won-Ki Jeong","doi":"10.1117/1.JMI.12.6.061402","DOIUrl":"10.1117/1.JMI.12.6.061402","url":null,"abstract":"<p><strong>Purpose: </strong>Biliary tract cancer, also known as intrahepatic cholangiocarcinoma (IHCC), is a rare disease that shows no clear symptoms during its early stage, but its prognosis depends highly on the cancer subtype. Hence, an accurate cancer subtype classification model is necessary to provide better treatment plans to patients and to reduce mortality. However, annotating histopathology images at the pixel or patch level is time-consuming and labor-intensive for giga-pixel whole slide images. To address this problem, we propose a weakly supervised method for classifying IHCC subtypes using only image-level labels.</p><p><strong>Approach: </strong>The core idea of the proposed method is to detect regions (i.e., subimages or patches) commonly included in all subtypes, which we name the \"hidden class,\" and to remove them via iterative application of contrastive loss and label smoothing. 
Doing so will enable us to obtain only patches that faithfully represent each subtype, which are then used to train the image-level classification model by multiple instance learning (MIL).</p><p><strong>Results: </strong>Our method outperforms the state-of-the-art weakly supervised learning methods ABMIL, TransMIL, and DTFD-MIL by <math><mrow><mo>∼</mo> <mn>17</mn> <mo>%</mo></mrow> </math> , 18%, and 8%, respectively, and achieves performance comparable to that of supervised methods.</p><p><strong>Conclusions: </strong>The introduction of a hidden class to represent patches commonly found across all subtypes enhances the accuracy of IHCC classification and addresses the weak labeling problem in histopathology images.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"12 6","pages":"061402"},"PeriodicalIF":1.9,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11898109/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143626473","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Purpose: Various deep learning (DL) approaches have been developed for estimating scatter radiation in digital breast tomosynthesis (DBT). Existing DL methods generally employ an end-to-end training approach, overlooking the underlying physics of scatter formation. We propose a deep learning approach inspired by asymmetric scatter kernel superposition to estimate scatter in DBT.
Approach: We use the network to generate the scatter amplitude distribution as well as the scatter kernel width and asymmetric factor map. To account for variations in local breast thickness and shape in DBT projection data, we integrated the Euclidean distance map and projection angle information into the network design for estimating the asymmetric factor.
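A minimal one-dimensional sketch of asymmetric scatter kernel superposition follows. The asymmetric-Gaussian kernel form and the per-pixel parameter arrays are illustrative assumptions; in the paper these maps are produced by the network and the superposition runs over 2D projections.

```python
import numpy as np

def asymmetric_kernel(offsets, width, asym):
    """Asymmetric Gaussian: one side widened by the asymmetry factor
    (an illustrative kernel form, not the paper's exact parameterization)."""
    w = np.where(offsets >= 0, width * asym, width)
    return np.exp(-0.5 * (offsets / w) ** 2)

def scatter_superposition(primary, amplitude, width, asym):
    """Superpose one normalized asymmetric kernel per detector pixel.

    All arguments are 1D arrays over detector pixels: the primary signal and
    the per-pixel amplitude, kernel width, and asymmetry factor maps.
    """
    n = primary.size
    x = np.arange(n)
    scatter = np.zeros(n)
    for y in range(n):  # each pixel spreads its scatter contribution outward
        k = asymmetric_kernel(x - y, width[y], asym[y])
        scatter += amplitude[y] * primary[y] * k / k.sum()
    return scatter
```

Because each kernel is normalized, the total scatter equals the sum of the per-pixel amplitude-weighted primary signal, which makes the amplitude map directly interpretable.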
Results: Systematic experiments on numerical phantom data and physical experimental data demonstrated that the proposed approach outperforms UNet-based end-to-end scatter estimation and symmetric kernel-based approaches in terms of signal-to-noise ratio and structural similarity index measure of the resulting scatter-corrected images.
Conclusions: The proposed method represents a significant advance in scatter estimation for DBT projections, enabling robust and reliable physics-informed scatter correction.
{"title":"Asymmetric scatter kernel estimation neural network for digital breast tomosynthesis.","authors":"Subong Hyun, Seoyoung Lee, Ilwong Choi, Choul Woo Shin, Seungryong Cho","doi":"10.1117/1.JMI.12.S2.S22008","DOIUrl":"10.1117/1.JMI.12.S2.S22008","url":null,"abstract":"<p><strong>Purpose: </strong>Various deep learning (DL) approaches have been developed for estimating scatter radiation in digital breast tomosynthesis (DBT). Existing DL methods generally employ an end-to-end training approach, overlooking the underlying physics of scatter formation. We propose a deep learning approach inspired by asymmetric scatter kernel superposition to estimate scatter in DBT.</p><p><strong>Approach: </strong>We use the network to generate the scatter amplitude distribution as well as the scatter kernel width and asymmetric factor map. To account for variations in local breast thickness and shape in DBT projection data, we integrated the Euclidean distance map and projection angle information into the network design for estimating the asymmetric factor.</p><p><strong>Results: </strong>Systematic experiments on numerical phantom data and physical experimental data demonstrated the outperformance of the proposed approach to UNet-based end-to-end scatter estimation and symmetric kernel-based approaches in terms of signal-to-noise ratio and structure similarity index measure of the resulting scatter corrected images.</p><p><strong>Conclusions: </strong>The proposed method is believed to have achieved significant advancement in scatter estimation of DBT projections, allowing a robust and reliable physics-informed scatter correction.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"12 Suppl 2","pages":"S22008"},"PeriodicalIF":1.9,"publicationDate":"2025-11-01","publicationTypes":"Journal 
Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12162176/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144303312","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-11-01 | Epub Date: 2025-06-19 | DOI: 10.1117/1.JMI.12.6.061405
Oscar Ramos-Soto, Itzel Aranguren, Manuel Carrillo M, Diego Oliva, Sandra E Balderas-Mata
Purpose: We examine the transformative potential of artificial intelligence (AI) in medical imaging diagnosis, focusing on improving diagnostic accuracy and efficiency through advanced algorithms. We also address the significant challenges preventing immediate clinical adoption of AI from technical, ethical, and legal perspectives. The aim is to highlight the current state of AI in medical imaging and outline the steps necessary to ensure safe, effective, and ethically sound clinical implementation.
Approach: We conduct a comprehensive discussion, with special emphasis on the technical requirements for robust AI models, the ethical frameworks needed for responsible deployment, and the legal implications, including data privacy and regulatory compliance. Explainable artificial intelligence (XAI) is examined as a means to increase transparency and build trust among healthcare professionals and patients.
Results: The analysis reveals key challenges to AI integration in clinical settings, including the need for extensive high-quality datasets, model reliability, advanced infrastructure, and compliance with regulatory standards. The lack of explainability in AI outputs remains a barrier, with XAI identified as crucial for meeting transparency standards and enhancing trust among end users.
Conclusions: Overcoming these barriers requires a collaborative, multidisciplinary approach to integrate AI into clinical practice responsibly. Addressing technical, ethical, and legal issues will support a smoother transition, fostering a more accurate, efficient, and patient-centered healthcare system in which AI augments traditional medical practices.
{"title":"Artificial intelligence in medical imaging diagnosis: are we ready for its clinical implementation?","authors":"Oscar Ramos-Soto, Itzel Aranguren, Manuel Carrillo M, Diego Oliva, Sandra E Balderas-Mata","doi":"10.1117/1.JMI.12.6.061405","DOIUrl":"10.1117/1.JMI.12.6.061405","url":null,"abstract":"<p><strong>Purpose: </strong>We examine the transformative potential of artificial intelligence (AI) in medical imaging diagnosis, focusing on improving diagnostic accuracy and efficiency through advanced algorithms. It addresses the significant challenges preventing immediate clinical adoption of AI, specifically from technical, ethical, and legal perspectives. The aim is to highlight the current state of AI in medical imaging and outline the necessary steps to ensure safe, effective, and ethically sound clinical implementation.</p><p><strong>Approach: </strong>We conduct a comprehensive discussion, with special emphasis on the technical requirements for robust AI models, the ethical frameworks needed for responsible deployment, and the legal implications, including data privacy and regulatory compliance. Explainable artificial intelligence (XAI) is examined as a means to increase transparency and build trust among healthcare professionals and patients.</p><p><strong>Results: </strong>The analysis reveals key challenges to AI integration in clinical settings, including the need for extensive high-quality datasets, model reliability, advanced infrastructure, and compliance with regulatory standards. The lack of explainability in AI outputs remains a barrier, with XAI identified as crucial for meeting transparency standards and enhancing trust among end users.</p><p><strong>Conclusions: </strong>Overcoming these barriers requires a collaborative, multidisciplinary approach to integrate AI into clinical practice responsibly. 
Addressing technical, ethical, and legal issues will support a softer transition, fostering a more accurate, efficient, and patient-centered healthcare system where AI augments traditional medical practices.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"12 6","pages":"061405"},"PeriodicalIF":1.9,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12177575/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144477323","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-11-01 | Epub Date: 2025-06-03 | DOI: 10.1117/1.JMI.12.6.061404
Ali Mammadov, Loïc Le Folgoc, Julien Adam, Anne Buronfosse, Gilles Hayem, Guillaume Hocquet, Pietro Gori
Purpose: Multiple instance learning (MIL) has emerged as the best solution for whole slide image (WSI) classification. It consists of dividing each slide into patches, which are treated as a bag of instances labeled with a global label. MIL includes two main approaches: instance-based and embedding-based. In the former, each patch is classified independently, and the patch scores are then aggregated to predict the bag label. In the latter, bag classification is performed after aggregating patch embeddings. Even though instance-based methods are naturally more interpretable, embedding-based MIL has usually been preferred in the past due to its robustness to poor feature extractors. Recently, the quality of feature embeddings has drastically increased using self-supervised learning (SSL). Nevertheless, many authors continue to endorse the superiority of embedding-based MIL.
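The two aggregation orders differ only in where the pooling happens relative to the classifier. A minimal contrast, assuming mean pooling and a linear/sigmoid scorer (both simplifying assumptions; real MIL heads use attention or gated pooling):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def instance_based_mil(patch_feats, w):
    """Instance-based MIL: classify every patch, then aggregate the scores.
    Mean pooling here; max or top-k pooling are common alternatives."""
    patch_scores = sigmoid(patch_feats @ w)    # one prediction per patch
    return float(patch_scores.mean())          # aggregate scores -> bag score

def embedding_based_mil(patch_feats, w):
    """Embedding-based MIL: aggregate embeddings, then classify the bag."""
    bag_embedding = patch_feats.mean(axis=0)   # aggregate first
    return float(sigmoid(bag_embedding @ w))   # classify the pooled embedding
```

The instance-based variant exposes a per-patch score map for free, which is why it is the more interpretable of the two.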
Approach: We conduct 710 experiments across 4 datasets, comparing 10 MIL strategies, 6 self-supervised methods with 4 backbones, 4 foundation models, and various pathology-adapted techniques. Furthermore, we introduce 4 instance-based MIL methods, never used before in the pathology domain.
Results: We show that with a good SSL feature extractor, simple instance-based MILs, with very few parameters, obtain similar or better performance than complex, state-of-the-art (SOTA) embedding-based MIL methods, setting new SOTA results on the BRACS and Camelyon16 datasets.
Conclusion: As simple instance-based MIL methods are naturally more interpretable and explainable to clinicians, our results suggest that more effort should be put into well-adapted SSL methods for WSI rather than into complex embedding-based MIL methods.
{"title":"Self-supervision enhances instance-based multiple instance learning methods in digital pathology: a benchmark study.","authors":"Ali Mammadov, Loïc Le Folgoc, Julien Adam, Anne Buronfosse, Gilles Hayem, Guillaume Hocquet, Pietro Gori","doi":"10.1117/1.JMI.12.6.061404","DOIUrl":"10.1117/1.JMI.12.6.061404","url":null,"abstract":"<p><strong>Purpose: </strong>Multiple instance learning (MIL) has emerged as the best solution for whole slide image (WSI) classification. It consists of dividing each slide into patches, which are treated as a bag of instances labeled with a global label. MIL includes two main approaches: instance-based and embedding-based. In the former, each patch is classified independently, and then, the patch scores are aggregated to predict the bag label. In the latter, bag classification is performed after aggregating patch embeddings. Even if instance-based methods are naturally more interpretable, embedding-based MILs have usually been preferred in the past due to their robustness to poor feature extractors. Recently, the quality of feature embeddings has drastically increased using self-supervised learning (SSL). Nevertheless, many authors continue to endorse the superiority of embedding-based MIL.</p><p><strong>Approach: </strong>We conduct 710 experiments across 4 datasets, comparing 10 MIL strategies, 6 self-supervised methods with 4 backbones, 4 foundation models, and various pathology-adapted techniques. 
Furthermore, we introduce 4 instance-based MIL methods, never used before in the pathology domain.</p><p><strong>Results: </strong>We show that with a good SSL feature extractor, simple instance-based MILs, with very few parameters, obtain similar or better performance than complex, state-of-the-art (SOTA) embedding-based MIL methods, setting new SOTA results on the BRACS and Camelyon16 datasets.</p><p><strong>Conclusion: </strong>As simple instance-based MIL methods are naturally more interpretable and explainable to clinicians, our results suggest that more effort should be put into well-adapted SSL methods for WSI rather than into complex embedding-based MIL methods.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"12 6","pages":"061404"},"PeriodicalIF":1.9,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12134610/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144235588","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-11-01 | Epub Date: 2025-11-24 | DOI: 10.1117/1.JMI.12.6.064003
Mingzhe Hu, Shaoyan Pan, Chih-Wei Chang, Richard L J Qiu, Junbo Peng, Tonghe Wang, Justin Roper, Hui Mao, David Yu, Xiaofeng Yang
Purpose: We propose a deep learning framework, the cycle-guided denoising diffusion probability model (CG-DDPM), for cross-modality magnetic resonance imaging (MRI) synthesis. The CG-DDPM aims to generate high-quality MRIs of a target modality from an existing modality, addressing the challenge of missing MRI sequences in clinical practice.
Approach: The CG-DDPM employs two interconnected conditional diffusion probabilistic models, with a cycle-guided reverse latent noise regularization to enhance synthesis consistency and anatomical fidelity. The framework was evaluated using the BraTS2020 dataset, which includes three-dimensional brain MRIs with T1-weighted, T2-weighted, and FLAIR modalities. The synthetic images were quantitatively assessed using metrics such as multi-scale structural similarity measure (MSSIM), peak signal-to-noise ratio (PSNR), and mean absolute error (MAE). The CG-DDPM was benchmarked against state-of-the-art methods, including IDDPM, IDDIM, and MRI-cGAN.
Results: The CG-DDPM demonstrated superior performance across all cross-modality synthesis tasks (T1 → T2, T2 → T1, T1 → FLAIR, and FLAIR → T1). It consistently achieved the highest MSSIM values (ranging from 0.966 to 0.971), the lowest MAE (0.011 to 0.013), and competitive PSNR values (27.7 to 28.8 dB). Across all tasks, CG-DDPM outperformed IDDPM, IDDIM, and MRI-cGAN in most metrics and exhibited significantly lower uncertainty and inconsistency in MC-based sampling. Statistical analyses confirmed the robustness of CG-DDPM, with p-values < 0.05 in key comparisons.
Conclusions: The proposed CG-DDPM provides a robust and efficient solution for cross-modality MRI synthesis, offering improved accuracy, stability, and clinical applicability compared with existing methods. This approach has the potential to streamline MRI-based workflows, enhance diagnostic imaging, and support precision treatment planning in medical physics and radiation oncology.
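Two of the reported metrics, MAE and PSNR, have simple closed forms. A sketch, assuming image intensities normalized to [0, 1] (consistent with the reported MAE of 0.011 to 0.013):

```python
import numpy as np

def mae(ref, synth):
    """Mean absolute error between reference and synthesized images."""
    return float(np.mean(np.abs(ref - synth)))

def psnr(ref, synth, data_range=1.0):
    """Peak signal-to-noise ratio in dB; data_range is the intensity span."""
    mse = np.mean((ref - synth) ** 2)
    return float(10.0 * np.log10(data_range ** 2 / mse))
```

MSSIM, the third reported metric, additionally compares local luminance, contrast, and structure across scales and is usually taken from an image-quality library rather than reimplemented.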
{"title":"Cross-modality 3D MRI synthesis via cycle-guided denoising diffusion probability model.","authors":"Mingzhe Hu, Shaoyan Pan, Chih-Wei Chang, Richard L J Qiu, Junbo Peng, Tonghe Wang, Justin Roper, Hui Mao, David Yu, Xiaofeng Yang","doi":"10.1117/1.JMI.12.6.064003","DOIUrl":"10.1117/1.JMI.12.6.064003","url":null,"abstract":"<p><strong>Purpose: </strong>We propose a deep learning framework, the cycle-guided denoising diffusion probability model (CG-DDPM), for cross-modality magnetic resonance imaging (MRI) synthesis. The CG-DDPM aims to generate high-quality MRIs of a target modality from an existing modality, addressing the challenge of missing MRI sequences in clinical practice.</p><p><strong>Approach: </strong>The CG-DDPM employs two interconnected conditional diffusion probabilistic models, with a cycle-guided reverse latent noise regularization to enhance synthesis consistency and anatomical fidelity. The framework was evaluated using the BraTS2020 dataset, which includes three-dimensional brain MRIs with <math><mrow><mi>T</mi> <mn>1</mn></mrow> </math> -weighted, <math><mrow><mi>T</mi> <mn>2</mn></mrow> </math> -weighted, and FLAIR modalities. The synthetic images were quantitatively assessed using metrics such as multi-scale structural similarity measure (MSSIM), peak signal-to-noise ratio (PSNR), and mean absolute error (MAE). The CG-DDPM was benchmarked against state-of-the-art methods, including IDDPM, IDDIM, and MRI-cGAN.</p><p><strong>Results: </strong>The CG-DDPM demonstrated superior performance across all cross-modality synthesis tasks (T1 → T2, T2 → T1, T1 → FLAIR, and FLAIR → T1). It consistently achieved the highest MSSIM values (ranging from 0.966 to 0.971), the lowest MAE (0.011 to 0.013), and competitive PSNR values (27.7 to 28.8 dB). Across all tasks, CG-DDPM outperformed IDDPM, IDDIM, and MRI-cGAN in most metrics and exhibited significantly lower uncertainty and inconsistency in MC-based sampling. 
Statistical analyses confirmed the robustness of CG-DDPM, with <math><mrow><mi>p</mi> <mrow><mtext>-</mtext></mrow> <mrow><mtext>values</mtext></mrow> <mo><</mo> <mn>0.05</mn></mrow> </math> in key comparisons.</p><p><strong>Conclusions: </strong>The proposed CG-DDPM provides a robust and efficient solution for cross-modality MRI synthesis, offering improved accuracy, stability, and clinical applicability compared with existing methods. This approach has the potential to streamline MRI-based workflows, enhance diagnostic imaging, and support precision treatment planning in medical physics and radiation oncology.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"12 6","pages":"064003"},"PeriodicalIF":1.7,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12643384/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145606948","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-11-01 | Epub Date: 2025-12-18 | DOI: 10.1117/1.JMI.12.6.064006
Lucas W Remedios, Chloe Cho, Trent M Schwartz, Dingjie Su, Gaurav Rudravaram, Chenyu Gao, Aravind R Krishnan, Adam M Saunders, Michael E Kim, Shunxing Bao, Alvin C Powers, Bennett A Landman, John Virostko
Purpose: Although elevated body mass index (BMI) is a well-known risk factor for type 2 diabetes, the disease's presence in some lean adults and absence in others with obesity suggests that more detailed measurements of body composition may uncover abdominal phenotypes of type 2 diabetes. With artificial intelligence (AI) and computed tomography (CT), we can now leverage robust image segmentation to extract detailed measurements of size, shape, and tissue composition from abdominal organs, abdominal muscle, and abdominal fat depots in 3D clinical imaging at scale. This creates an opportunity to empirically define body composition signatures linked to type 2 diabetes risk and protection using large-scale clinical data.
Approach: We studied imaging records of 1728 de-identified patients from Vanderbilt University Medical Center with BMI collected from the electronic health record. To uncover BMI-specific diabetic abdominal patterns from clinical CT, we applied our design four times: once on the full cohort (n = 1728) and once on lean (n = 497), overweight (n = 611), and obese (n = 620) subgroups separately. Briefly, our experimental design transforms abdominal scans into collections of explainable measurements, identifies which measurements most strongly predict type 2 diabetes and how they contribute to risk or protection, groups scans by shared model decision patterns, and links those decision patterns back to interpretable abdominal phenotypes in the original explainable measurement space of the abdomen using the following steps. (1) To capture abdominal composition: we represented each scan as a collection of 88 automatically extracted measurements of the size, shape, and fat content of abdominal structures using TotalSegmentator. (2) To learn key predictors: we trained a 10-fold cross-validated random forest classifier with SHapley Additive exPlanations (SHAP) analysis to rank features and estimate their risk-versus-protective effects for type 2 diabetes. (3) To validate individual effects: for the 20 highest-ranked features, we ran univariate logistic regressions to quantify their independent associations with type 2 diabetes. (4) To identify decision-making patterns: we embedded the top-20 SHAP profiles with uniform manifold approximation and projection and applied silhouette-guided K-means to cluster the random forest's decision space. (5) To link decisions to abdominal phenotypes: we fit one-versus-rest classifiers on the original anatomical measurements from each decision cluster and applied a second SHAP analysis to explore whether the random forest's logic had identified abdominal phenotypes.
Results: Across the full, lean, overweight, and obese cohort
{"title":"Data-driven abdominal phenotypes of type 2 diabetes in lean, overweight, and obese cohorts from computed tomography.","authors":"Lucas W Remedios, Chloe Cho, Trent M Schwartz, Dingjie Su, Gaurav Rudravaram, Chenyu Gao, Aravind R Krishnan, Adam M Saunders, Michael E Kim, Shunxing Bao, Alvin C Powers, Bennett A Landman, John Virostko","doi":"10.1117/1.JMI.12.6.064006","DOIUrl":"10.1117/1.JMI.12.6.064006","url":null,"abstract":"<p><strong>Purpose: </strong>Although elevated body mass index (BMI) is a well-known risk factor for type 2 diabetes, the disease's presence in some lean adults and absence in others with obesity suggests that more detailed measurements of body composition may uncover abdominal phenotypes of type 2 diabetes. With artificial intelligence (AI) and computed tomography (CT), we can now leverage robust image segmentation to extract detailed measurements of size, shape, and tissue composition from abdominal organs, abdominal muscle, and abdominal fat depots in 3D clinical imaging at scale. This creates an opportunity to empirically define body composition signatures linked to type 2 diabetes risk and protection using large-scale clinical data.</p><p><strong>Approach: </strong>We studied imaging records of 1728 de-identified patients from Vanderbilt University Medical Center with BMI collected from the electronic health record. To uncover BMI-specific diabetic abdominal patterns from clinical CT, we applied our design four times: once on the full cohort ( <math><mrow><mi>n</mi> <mo>=</mo> <mn>1728</mn></mrow> </math> ) and once on lean ( <math><mrow><mi>n</mi> <mo>=</mo> <mn>497</mn></mrow> </math> ), overweight ( <math><mrow><mi>n</mi> <mo>=</mo> <mn>611</mn></mrow> </math> ), and obese ( <math><mrow><mi>n</mi> <mo>=</mo> <mn>620</mn></mrow> </math> ) subgroups separately. 
Briefly, our experimental design transforms abdominal scans into collections of explainable measurements, identifies which measurements most strongly predict type 2 diabetes and how they contribute to risk or protection, groups scans by shared model decision patterns, and links those decision patterns back to interpretable abdominal phenotypes in the original explainable measurement space of the abdomen using the following steps. (1) To capture abdominal composition: we represented each scan as a collection of 88 automatically extracted measurements of the size, shape, and fat content of abdominal structures using TotalSegmentator. (2) To learn key predictors: we trained a 10-fold cross-validated random forest classifier with SHapley Additive exPlanations (SHAP) analysis to rank features and estimate their risk-versus-protective effects for type 2 diabetes. (3) To validate individual effects: for the 20 highest-ranked features, we ran univariate logistic regressions to quantify their independent associations with type 2 diabetes. (4) To identify decision-making patterns: we embedded the top-20 SHAP profiles with uniform manifold approximation and projection and applied silhouette-guided K-means to cluster the random forest's decision space. 
(5) To link decisions to abdominal phenotypes: we fit one-versus-rest classifiers on the original anatomical measurements from each decision cluster and applied a second SHAP analysis to explore whether the random forest's logic had identified abdominal phenotypes.</p><p><strong>Results: </strong>Across the full, lean, overweight, and obese cohort","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"12 6","pages":"064006"},"PeriodicalIF":1.7,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12712129/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145806053","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
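The silhouette-guided K-means step of the pipeline above (step 4) can be sketched in plain NumPy. The candidate k range and the plain Lloyd's algorithm with random initialization are assumptions made for illustration:

```python
import numpy as np

def kmeans(X, k, n_iter=50, seed=0):
    """Plain Lloyd's K-means; returns a cluster label for each row of X."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(n_iter):
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):  # keep the old centroid if a cluster empties
                centroids[j] = X[labels == j].mean(axis=0)
    return labels

def silhouette_score(X, labels):
    """Mean silhouette: a = mean intra-cluster distance, b = mean distance
    to the nearest other cluster (O(n^2) pairwise; fine for a sketch)."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    n = len(X)
    total = 0.0
    for i in range(n):
        own = (labels == labels[i]) & (np.arange(n) != i)
        a = d[i, own].mean() if own.any() else 0.0
        b = min(d[i, labels == c].mean()
                for c in set(labels.tolist()) if c != labels[i])
        total += (b - a) / max(a, b)
    return total / n

def silhouette_guided_kmeans(X, k_range=(2, 3, 4, 5)):
    """Pick the k that maximizes the silhouette, then return its clustering."""
    best_k = max(k_range, key=lambda k: silhouette_score(X, kmeans(X, k)))
    return best_k, kmeans(X, best_k)
```

In the paper this clustering runs on the UMAP-embedded SHAP profiles rather than raw measurements, so each resulting cluster groups scans that the random forest judged for similar reasons.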
Pub Date: 2025-11-01 | Epub Date: 2025-11-08 | DOI: 10.1117/1.JMI.12.6.064501
Garazi Casillas Martinez, Anthony Winder, Emma A M Stanley, Raissa Souza, Matthias Wilms, Myka Estes, Sarah J MacEachern, Nils D Forkert
Purpose: Autism is one of the most common neurodevelopmental conditions, and it is characterized by restricted, repetitive behaviors and social difficulties that affect daily functioning. It is challenging to provide an early and accurate diagnosis due to the wide diversity of symptoms and the developmental changes that occur during childhood. We evaluate the feasibility of an explainable deep learning (DL) model using structural MRI (sMRI) to identify meaningful brain biomarkers relevant to autism in children and thus support its diagnosis.
Approach: A total of 452 T1-weighted sMRI scans from children aged 9 to 11 years were obtained from the Autism Brain Imaging Data Exchange database. A DL model was trained to differentiate between autistic and typically developing children. Model explainability was assessed using saliency maps to identify key brain regions contributing to classification. Model performance was evaluated across 20 folds and compared with traditional machine learning models trained with regional volumetric features extracted from the sMRI scans.
Results: The model achieved a mean area under the receiver operating characteristic curve of 71.2%. The saliency maps highlighted brain regions that are known neuroanatomical and functional biomarkers associated with autism, such as the cuneus, pericalcarine, ventricles, lingual, vermal lobules, caudate, and thalamus.
Conclusions: We show the potential of interpretable DL models trained on sMRI data to aid in autism diagnosis within a narrowly defined pediatric age group. Our findings contribute to the field of explainable artificial intelligence methods in neurodevelopmental research and may help in clinical decision-making for autism and other neurodevelopmental conditions.
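Gradient-based saliency of the kind described above can be illustrated with a small numpy sketch: saliency is the absolute gradient of the class score with respect to each input element, here estimated by finite differences on a toy linear "model" (the 8x8 shapes and linear score are illustrative assumptions, not the paper's CNN):

```python
import numpy as np

def saliency_fd(score_fn, x, eps=1e-4):
    """Finite-difference saliency: |d score_fn(x) / d x_i| per input element."""
    g = np.zeros_like(x)
    for idx in np.ndindex(x.shape):
        xp, xm = x.copy(), x.copy()
        xp[idx] += eps
        xm[idx] -= eps
        g[idx] = (score_fn(xp) - score_fn(xm)) / (2 * eps)
    return np.abs(g)

rng = np.random.default_rng(0)
w = rng.normal(size=(8, 8))                  # stand-in for learned weights
x = rng.normal(size=(8, 8))                  # stand-in for an image slice

score = lambda img: float((w * img).sum())   # linear class score
sal = saliency_fd(score, x)

# For a linear score the input gradient is exactly w, so the saliency map
# equals |w|; high-saliency pixels are the ones driving the prediction.
assert np.allclose(sal, np.abs(w), atol=1e-6)
```

In practice the gradient is obtained by backpropagation through the trained network rather than by finite differences, but the interpretation of the resulting map is the same.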
"Interpretable convolutional neural network for autism diagnosis support in children using structural magnetic resonance imaging datasets." Journal of Medical Imaging 12(6), 064501. DOI: 10.1117/1.JMI.12.6.064501. PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12596041/pdf/
Pub Date: 2025-11-01 | Epub Date: 2025-11-13 | DOI: 10.1117/1.JMI.12.6.067001
Mary N Henderson, David B Jordan, Zong-Ming Li
Purpose: The purpose of this study was to assess the variations in shear-wave speed (SWS) in individual thenar muscles under varied pinch forces in healthy adults. It was hypothesized that (1) SWS would vary among the individual thenar muscles, and (2) there would be an increase in SWS with increased pinch force.
Approach: Thirteen healthy participants' dominant hands were imaged using an ultrasound probe aligned longitudinally along the muscle fibers of the abductor pollicis brevis (APB), opponens pollicis (OPP), and flexor pollicis brevis (FPB). The SWS of each muscle was derived. Each participant completed trials consisting of randomly ordered pinch forces at 0, 10, and 20 N, 10% of maximum pinch force (MPF), and 20% MPF.
Results: The SWSs varied significantly among individual thenar muscles (p < 0.01) under absolute (p < 0.01) and relative forces (p < 0.05). There was a significant increase in SWS as the force increased from 0 to 20 N in the APB (p < 0.001) and OPP (p < 0.001), but not in the FPB (p = 0.873). There was a significant increase in SWS as the force increased from 0 to 20% MPF in the APB (p = 0.005), but not in the OPP (p = 0.586) or the FPB (p = 0.984).
Conclusions: The SWS of the APB and OPP increased as force increased and differed among the thenar muscles. This suggests that SWS may be an appropriate measure for evaluating muscles under tension or under different voluntary force conditions, particularly for the APB and OPP.
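As background (the textbook elastography relation, not part of this study), shear-wave speed converts to shear modulus via mu = rho * c^2. A quick sketch, assuming a typical skeletal-muscle density of about 1060 kg/m^3:

```python
def shear_modulus_kpa(sws_m_per_s, density_kg_m3=1060.0):
    """Shear modulus (kPa) from shear-wave speed (m/s) via mu = rho * c^2."""
    return density_kg_m3 * sws_m_per_s ** 2 / 1000.0

# The quadratic relation means a doubling of SWS implies a fourfold
# increase in shear modulus:
assert abs(shear_modulus_kpa(3.0) - 9.54) < 1e-9
assert abs(shear_modulus_kpa(6.0) - 4 * shear_modulus_kpa(3.0)) < 1e-9
```

This is why even modest SWS increases under load correspond to substantial stiffening of the muscle tissue.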
"Shear-wave elastography of healthy individual thenar muscles." Journal of Medical Imaging 12(6), 067001. DOI: 10.1117/1.JMI.12.6.067001. PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12614004/pdf/
Pub Date: 2025-11-01 | Epub Date: 2025-12-05 | DOI: 10.1117/1.JMI.12.6.061411
Bulut Aygunes, Ramazan Gokberk Cinbis, Selim Aksoy
Purpose: Weakly supervised learning (WSL) is widely used for histopathological image analysis by modeling images as sets of fixed-size patches and utilizing image-level diagnoses as weak labels. However, in multiclass classification scenarios, patches corresponding to a wide spectrum of diagnostic categories can co-exist in a single image, complicating the learning process. We aim to address label uncertainty in such multiclass settings.
Approach: We propose a two-branch architecture and a complementary training strategy to improve patch-based WSL. One branch estimates patch-level class likelihoods, whereas the other predicts per-class patch relevance weights. These outputs are combined into image-level class predictions via a relevance-weighted sum of per-patch class likelihoods. To further improve performance, we introduce a multilabel augmentation strategy that forms new training samples by combining the patch sets and labels of image pairs; the resulting multilabel samples enrich the training set by increasing the chance that patches relevant to the augmented label sets are present.
Results: We evaluate our method on two challenging multiclass breast histopathology datasets for region of interest classification. The proposed architecture and training strategy outperform conventional weakly supervised methods, demonstrating improved classification accuracy and robustness, particularly in underrepresented classes.
Conclusions: The proposed architecture effectively models the complex relationship between image-level labels and patch-level content in multiclass histopathological image analysis. Combined with the image-level multilabel augmentation strategy, it improves learning under label uncertainty. These contributions hold potential for more accurate and scalable diagnostic support systems in digital pathology.
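The relevance-weighted aggregation and the pairwise multilabel augmentation described above can be sketched in a few lines of numpy (shapes, names, and the softmax normalizations are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def softmax(z, axis):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def aggregate(patch_logits, relevance_logits):
    """Image-level scores: relevance-weighted sum of per-patch class likelihoods."""
    L = softmax(patch_logits, axis=1)       # (P, C) patch-level class likelihoods
    R = softmax(relevance_logits, axis=0)   # (P, C) weights summing to 1 over patches
    return (L * R).sum(axis=0)              # (C,) image-level class scores

def augment(patches_a, labels_a, patches_b, labels_b):
    """Multilabel augmentation: concatenate patch sets, union multi-hot labels."""
    return np.concatenate([patches_a, patches_b]), np.maximum(labels_a, labels_b)

rng = np.random.default_rng(1)
P, C = 16, 4
scores = aggregate(rng.normal(size=(P, C)), rng.normal(size=(P, C)))
assert scores.shape == (C,) and np.all((scores >= 0) & (scores <= 1))

pa, pb = rng.normal(size=(3, 5)), rng.normal(size=(2, 5))
la, lb = np.array([1, 0, 0, 0]), np.array([0, 0, 1, 0])
patches_ab, labels_ab = augment(pa, la, pb, lb)
assert patches_ab.shape == (5, 5) and labels_ab.tolist() == [1, 0, 1, 0]
```

Because each class has its own relevance weighting over patches, a patch can dominate the score for one diagnosis while being ignored for another, which is what lets the model cope with mixed-category images.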
"Patch relevance estimation and multilabel augmentation for weakly supervised histopathology image classification." Journal of Medical Imaging 12(6), 061411. DOI: 10.1117/1.JMI.12.6.061411. PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12680080/pdf/