Airway quantifications of bronchitis patients with photon-counting and energy-integrating computed tomography.
Pub Date: 2026-01-01. Epub Date: 2026-02-02. DOI: 10.1117/1.JMI.13.1.013501
Fong Chi Ho, William Paul Segars, Ehsan Samei, Ehsan Abadi
Purpose: Accurate airway measurement is critical for bronchitis quantification with computed tomography (CT), yet optimal protocols and the added value of photon-counting CT (PCCT) over energy-integrating CT (EICT) for reducing bias remain unclear. We quantified biomarker accuracy across modalities and protocols and assessed strategies to reduce bias.
Approach: A virtual imaging trial with 20 bronchitis anthropomorphic models was scanned using a validated simulator for two systems (EICT: SOMATOM Flash; PCCT: NAEOTOM Alpha) at 6.3 and 12.6 mGy. Reconstructions varied algorithm, kernel sharpness, slice thickness, and pixel size. Pi10 (square-root wall thickness at 10-mm perimeter) and WA% (wall-area percentage) were compared against ground-truth airway dimensions obtained from the 0.1-mm-precision anatomical models prior to CT simulation. External validation used clinical PCCT (n = 22) and EICT (n = 80).
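As a concrete illustration of the two biomarkers, the sketch below computes Pi10 and WA% from per-airway measurements. It assumes tabulated internal perimeter, wall area, and lumen area per airway and uses the common regression-based Pi10 definition (a square-root wall measure evaluated at a 10-mm internal perimeter); all variable names are illustrative and not taken from the paper.

```python
import numpy as np

def airway_biomarkers(inner_perimeter_mm, wall_area_mm2, lumen_area_mm2):
    """Illustrative Pi10 and WA% computation from per-airway measurements.

    Pi10: regress the square root of wall area against internal perimeter
    across the measured airways, then evaluate the fit at a 10-mm perimeter.
    WA%: wall area as a percentage of total (wall + lumen) airway area.
    """
    inner_perimeter_mm = np.asarray(inner_perimeter_mm, dtype=float)
    wall_area_mm2 = np.asarray(wall_area_mm2, dtype=float)
    lumen_area_mm2 = np.asarray(lumen_area_mm2, dtype=float)

    # Linear fit: sqrt(wall area) = slope * perimeter + intercept
    slope, intercept = np.polyfit(inner_perimeter_mm, np.sqrt(wall_area_mm2), deg=1)
    pi10 = slope * 10.0 + intercept  # value of the fit at a 10-mm perimeter

    # Wall-area percentage, averaged over the measured airways
    wa_percent = np.mean(100.0 * wall_area_mm2 / (wall_area_mm2 + lumen_area_mm2))
    return pi10, wa_percent
```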
Results: Simulated airway dimensions agreed with pathological references (R = 0.89 to 0.93). PCCT had lower errors than EICT across segmented generations (p < 0.05). Under optimal parameters, PCCT improved Pi10 and WA% accuracy by 26.3% and 64.9%. Across the tested PCCT and EICT imaging protocols, improvements were associated with sharper kernels (25.8% Pi10, 33.0% WA%), thinner slices (23.9% Pi10, 49.8% WA%), smaller pixels (17.0% Pi10, 23.1% WA%), and higher dose (≤3.9%). Clinically, PCCT achieved higher maximum airway generation (8.8 ± 0.5 versus 6.0 ± 1.1) and lower variability, mirroring trends in virtual results.
Conclusions: PCCT improves the accuracy and consistency of airway biomarker quantification relative to EICT, particularly with optimized protocols. The validated virtual platform enables modality-bias assessment and protocol optimization for accurate, reproducible bronchitis measurements.
{"title":"Airway quantifications of bronchitis patients with photon-counting and energy-integrating computed tomography.","authors":"Fong Chi Ho, William Paul Segars, Ehsan Samei, Ehsan Abadi","doi":"10.1117/1.JMI.13.1.013501","DOIUrl":"10.1117/1.JMI.13.1.013501","url":null,"abstract":"<p><strong>Purpose: </strong>Accurate airway measurement is critical for bronchitis quantification with computed tomography (CT), yet optimal protocols and the added value of photon-counting CT (PCCT) over energy-integrating CT (EICT) for reducing bias remain unclear. We quantified biomarker accuracy across modalities and protocols and assessed strategies to reduce bias.</p><p><strong>Approach: </strong>A virtual imaging trial with 20 bronchitis anthropomorphic models was scanned using a validated simulator for two systems (EICT: SOMATOM Flash; PCCT: NAEOTOM Alpha) at 6.3 and 12.6 mGy. Reconstructions varied algorithm, kernel sharpness, slice thickness, and pixel size. Pi10 (square-root wall thickness at 10-mm perimeter) and WA% (wall-area percentage) were compared against ground-truth airway dimensions obtained from the 0.1-mm-precision anatomical models prior to CT simulation. External validation used clinical PCCT ( <math><mrow><mi>n</mi> <mo>=</mo> <mn>22</mn></mrow> </math> ) and EICT ( <math><mrow><mi>n</mi> <mo>=</mo> <mn>80</mn></mrow> </math> ).</p><p><strong>Results: </strong>Simulated airway dimensions agreed with pathological references ( <math><mrow><mi>R</mi> <mo>=</mo> <mn>0.89</mn> <mo>-</mo> <mn>0.93</mn></mrow> </math> ). PCCT had lower errors than EICT across segmented generations ( <math><mrow><mi>p</mi> <mo><</mo> <mn>0.05</mn></mrow> </math> ). Under optimal parameters, PCCT improved Pi10 and WA% accuracy by 26.3% and 64.9%. Across the tested PCCT and EICT imaging protocols, improvements were associated with sharper kernels (25.8% Pi10, 33.0% WA%), thinner slices (23.9% Pi10, 49.8% WA%), smaller pixels (17.0% Pi10, 23.1% WA%), and higher dose ( <math><mrow><mo>≤</mo> <mn>3.9</mn> <mo>%</mo></mrow> </math> ). Clinically, PCCT achieved higher maximum airway generation ( <math><mrow><mn>8.8</mn> <mo>±</mo> <mn>0.5</mn></mrow> </math> versus <math><mrow><mn>6.0</mn> <mo>±</mo> <mn>1.1</mn></mrow> </math> ) and lower variability, mirroring trends in virtual results.</p><p><strong>Conclusions: </strong>PCCT improves the accuracy and consistency of airway biomarker quantification relative to EICT, particularly with optimized protocols. The validated virtual platform enables modality-bias assessment and protocol optimization for accurate, reproducible bronchitis measurements.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"13 1","pages":"013501"},"PeriodicalIF":1.7,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12863983/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146114479","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
LCSD-Net: a light-weight cross-attention-based semantic dual transformer for domain generalization in melanoma detection.
Pub Date: 2026-01-01. Epub Date: 2026-01-06. DOI: 10.1117/1.JMI.13.1.014502
Rishi Agrawal, Neeraj Gupta, Anand Singh Jalal
Purpose: Research in deep learning has produced great advances in melanoma detection. However, recent literature has emphasized a tendency of certain models to rely on disease-irrelevant visual artifacts such as dark corners, dense hair, or ruler marks. Dependence on these markers leads to biased models that perform well during training but generalize poorly to heterogeneous clinical environments. To address these limitations and improve the reliability of skin lesion detection, a lightweight cross-attention-based semantic dual (LCSD) transformer model is proposed.
Approach: The LCSD model extracts global-level semantic information, uses feature normalization to improve model accuracy, and employs semantic queries to improve domain generalization. Multihead attention is included with the semantic queries to refine global features. The cross-attention between feature maps and semantic query provides the model with a generalized encoding of the global context. The model improved the computational complexity from O(n²d) to O(nmd + m²d), which makes the model suitable for the development of real-time and mobile applications.
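To illustrate why routing attention through a small set of m learned semantic queries lowers the cost from O(n²d) to O(nmd + m²d), here is a minimal PyTorch sketch under assumed shapes and module names; it reflects the general semantic-query cross-attention idea rather than the exact LCSD architecture.

```python
import torch
import torch.nn as nn

class SemanticQueryAttention(nn.Module):
    """Sketch: m learned semantic queries mediate global context for n tokens.

    Self-attention over n tokens costs O(n^2 d); refining m << n queries among
    themselves (O(m^2 d)) and cross-attending between queries and tokens
    (O(nmd)) gives the reduced O(nmd + m^2 d) cost.
    """
    def __init__(self, dim: int, num_queries: int, num_heads: int = 4):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_queries, dim))  # m semantic queries
        self.query_self_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.gather = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.scatter = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, n, dim) patch features from the backbone
        b = tokens.shape[0]
        q = self.queries.unsqueeze(0).expand(b, -1, -1)   # (b, m, dim)
        q, _ = self.query_self_attn(q, q, q)               # O(m^2 d): refine queries
        ctx, _ = self.gather(q, tokens, tokens)            # O(nmd): queries gather global context
        out, _ = self.scatter(tokens, ctx, ctx)            # O(nmd): redistribute context to tokens
        return self.norm(tokens + out)

# Example usage with 196 patch tokens of width 192 and 16 semantic queries.
attn = SemanticQueryAttention(dim=192, num_queries=16)
out = attn(torch.randn(2, 196, 192))   # -> (2, 196, 192)
```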
Results: Empirical evaluation was conducted on three challenging datasets: Derm7pt-Dermoscopic, Derm7pt-Clinical, and PAD-UFES-20. The proposed model achieved classification accuracies of 82.82%, 72.95%, and 86.21%, respectively. These results demonstrate superior performance compared with conventional transformer-based models, highlighting both improved robustness and reduced computational cost.
Conclusion: The LCSD model mitigates the influence of irrelevant visual characteristics, enhances domain generalization, and ensures better adaptability across diverse clinical scenarios. Its lightweight design further supports deployment in mobile applications, making it a reliable and efficient solution for real-world melanoma detection.
{"title":"LCSD-Net: a light-weight cross-attention-based semantic dual transformer for domain generalization in melanoma detection.","authors":"Rishi Agrawal, Neeraj Gupta, Anand Singh Jalal","doi":"10.1117/1.JMI.13.1.014502","DOIUrl":"https://doi.org/10.1117/1.JMI.13.1.014502","url":null,"abstract":"<p><strong>Purpose: </strong>Research in deep learning has shown a great advancement in the detection of melanoma. However, recent literature has emphasized a tendency of certain models to rely on disease-irrelevant visual artifacts such as dark corners, dense hair, or ruler marks. The dependence on these markers leads to biased models that do well for training but generalize poorly to heterogeneous clinical environments. To address these limitations in developing reliability in skin lesion detection, a lightweight cross-attention-based semantic dual (LCSD) transformer model was proposed.</p><p><strong>Approach: </strong>The LCSD model extracts global-level semantic information, uses feature normalization to improve model accuracy, and employs semantic queries to improve domain generalization. Multihead attention is included with the semantic queries to refine global features. The cross-attention between feature maps and semantic query provides the model with a generalized encoding of the global context. The model improved the computational complexity from <math><mrow><mi>O</mi> <mo>(</mo> <msup><mrow><mi>n</mi></mrow> <mrow><mn>2</mn></mrow> </msup> <mi>d</mi> <mo>)</mo></mrow> </math> to <math><mrow><mi>O</mi> <mo>(</mo> <mi>n</mi> <mi>m</mi> <mi>d</mi> <mo>+</mo> <msup><mrow><mi>m</mi></mrow> <mrow><mn>2</mn></mrow> </msup> <mi>d</mi> <mo>)</mo></mrow> </math> , which makes the model suitable for the development of real-time and mobile applications.</p><p><strong>Results: </strong>Empirical evaluation was conducted on three challenging datasets: Derm7pt-Dermoscopic, Derm7pt-Clinical, and PAD-UFES-20. The proposed model achieved classification accuracies of 82.82%, 72.95%, and 86.21%, respectively. These results demonstrate superior performance compared with conventional transformer-based models, highlighting both improved robustness and reduced computational cost.</p><p><strong>Conclusion: </strong>The LCSD model mitigates the influence of irrelevant visual characteristics, enhances domain generalization, and ensures better adaptability across diverse clinical scenarios. Its lightweight design further supports deployment in mobile applications, making it a reliable and efficient solution for real-world melanoma detection.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"13 1","pages":"014502"},"PeriodicalIF":1.7,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12773922/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145918799","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Radiomic signatures from baseline CT predict chemotherapy response in unresectable colorectal liver metastases.
Pub Date: 2026-01-01. Epub Date: 2026-01-13. DOI: 10.1117/1.JMI.13.1.014505
Mane Piliposyan, Jacob J Peoples, Mohammad Hamghalam, Ramtin Mojtahedi, Kaitlyn Kobayashi, E Claire Bunker, Natalie Gangai, Hyunseon C Kang, Yun Shin Chun, Christian Muise, Richard K G Do, Amber L Simpson
Purpose: Colorectal cancer is the third most common cancer globally, with a high mortality rate due to metastatic progression, particularly in the liver. Surgical resection remains the main curative treatment, but only a small subset of patients is eligible for surgery at diagnosis. For patients with initially unresectable colorectal liver metastases (CRLM), neoadjuvant chemotherapy can downstage tumors, potentially making surgery feasible. We investigate whether radiomic signatures, quantitative imaging biomarkers derived from baseline computed tomography (CT) scans, can noninvasively predict chemotherapy response in patients with unresectable CRLM, offering a pathway toward personalized treatment planning.
Approach: We used radiomics combined with a stacking classifier (SC) to predict treatment outcome. Baseline CT imaging data from 355 patients with initially unresectable CRLM were analyzed using two regions of interest (ROIs) separately (all tumors in the liver and the largest tumor by volume). From each ROI, 107 radiomic features were extracted. The dataset was split into training and testing sets, and multiple machine learning models were trained and integrated via stacking to enhance prediction. Logistic regression coefficients were used to derive radiomic signatures.
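A minimal scikit-learn sketch of the stacking setup described above, assuming the 107 radiomic features per patient have already been extracted from the chosen ROI; the base learners, hyperparameters, and placeholder data below are illustrative, not the paper's configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# X: (n_patients, 107) radiomic features from the baseline CT ROI; y: responder labels.
# Placeholder data here; in practice the features come from a radiomics toolkit.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(355, 107)), rng.integers(0, 2, size=355)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

base_learners = [
    ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
    ("svm", make_pipeline(StandardScaler(), SVC(probability=True, random_state=0))),
]
# Logistic-regression meta-learner; its coefficients (or a separate logistic fit on
# the features) can be inspected to derive an interpretable radiomic signature.
stack = StackingClassifier(estimators=base_learners,
                           final_estimator=LogisticRegression(max_iter=1000))
stack.fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, stack.predict_proba(X_te)[:, 1]))
```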
Results: The SC achieved strong predictive performance, with an area under the receiver operating characteristic curve of up to 0.77 for response prediction. Logistic regression identified 12 and 7 predictive features for treatment response in all tumors and the largest tumor ROIs, respectively.
Conclusion: Our findings demonstrate that radiomic features from baseline CT scans can serve as robust, interpretable biomarkers for predicting chemotherapy response, offering insights to guide personalized treatment in unresectable CRLM.
{"title":"Radiomic signatures from baseline CT predict chemotherapy response in unresectable colorectal liver metastases.","authors":"Mane Piliposyan, Jacob J Peoples, Mohammad Hamghalam, Ramtin Mojtahedi, Kaitlyn Kobayashi, E Claire Bunker, Natalie Gangai, Hyunseon C Kang, Yun Shin Chun, Christian Muise, Richard K G Do, Amber L Simpson","doi":"10.1117/1.JMI.13.1.014505","DOIUrl":"https://doi.org/10.1117/1.JMI.13.1.014505","url":null,"abstract":"<p><strong>Purpose: </strong>Colorectal cancer is the third most common cancer globally, with a high mortality rate due to metastatic progression, particularly in the liver. Surgical resection remains the main curative treatment, but only a small subset of patients is eligible for surgery at diagnosis. For patients with initially unresectable colorectal liver metastases (CRLM), neoadjuvant chemotherapy can downstage tumors, potentially making surgery feasible. We investigate whether radiomic signatures-quantitative imaging biomarkers derived from baseline computed tomography (CT) scans-can noninvasively predict chemotherapy response in patients with unresectable CRLM, offering a pathway toward personalized treatment planning.</p><p><strong>Approach: </strong>We used radiomics combined with a stacking classifier (SC) to predict treatment outcome. Baseline CT imaging data from 355 patients with initially unresectable CRLM were analyzed using two regions of interest (ROIs) separately (all tumors in the liver and the largest tumor by volume). From each ROI, 107 radiomic features were extracted. The dataset was split into training and testing sets, and multiple machine learning models were trained and integrated via stacking to enhance prediction. Logistic regression coefficients were used to derive radiomic signatures.</p><p><strong>Results: </strong>The SC achieved strong predictive performance, with an area under the receiver operating characteristic curve of up to 0.77 for response prediction. Logistic regression identified 12 and 7 predictive features for treatment response in all tumors and the largest tumor ROIs, respectively.</p><p><strong>Conclusion: </strong>Our findings demonstrate that radiomic features from baseline CT scans can serve as robust, interpretable biomarkers for predicting chemotherapy response, offering insights to guide personalized treatment in unresectable CRLM.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"13 1","pages":"014505"},"PeriodicalIF":1.7,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12797257/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145971472","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
ER2Net: an evidential reasoning rule-enabled neural network for reliable triple-negative breast cancer tumor segmentation in magnetic resonance imaging.
Pub Date: 2026-01-01. Epub Date: 2026-01-29. DOI: 10.1117/1.JMI.13.1.014005
Kazi Md Farhad Mahmud, Ahmad Qasem, Joshua M Staley, Rachel Yoder, Allison Aripoli, Shane R Stecklein, Priyanka Sharma, Zhiguo Zhou
Purpose: Triple-negative breast cancer (TNBC) is an aggressive subtype with limited treatment options and high recurrence rates. Magnetic resonance imaging (MRI) is widely used for tumor assessment, but manual segmentation is labor-intensive and variable. Existing deep learning methods often lack generalizability, calibrated confidence, and robust uncertainty quantification.
Approach: We propose ER2Net, an evidential reasoning-enabled neural network for reliable TNBC tumor segmentation on MRI. ER2Net trains multiple U-Net variants with dropout to generate diverse predictions and introduces pixel-wise reliability to quantify model agreement. We then introduce two ensemble fusion techniques: weighted reliability (WR) segmentation, which leverages pixel-wise reliability to enhance sensitivity, and Bayesian fusion (BF), which integrates predictions probabilistically for robust consensus. Confidence calibration is achieved using evidential reasoning, and we further propose pixel-wise reliable confidence entropy (PWRE) as an uncertainty measure.
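The sketch below illustrates the pixel-wise reliability and weighted-fusion idea for an ensemble of probability maps; the agreement-based reliability used here is an assumption for illustration, not the paper's exact formulation.

```python
import numpy as np

def reliability_weighted_fusion(prob_maps: np.ndarray):
    """Illustrative pixel-wise reliability and weighted fusion for an ensemble.

    prob_maps: (K, H, W) foreground probabilities from K segmentation models.
    Reliability of model k at a pixel is taken as agreement with the ensemble
    mean (1 - absolute deviation); the fused map is the reliability-weighted
    average of the K predictions.
    """
    mean_map = prob_maps.mean(axis=0, keepdims=True)            # (1, H, W)
    reliability = 1.0 - np.abs(prob_maps - mean_map)            # (K, H, W), in [0, 1]
    weights = reliability / (reliability.sum(axis=0, keepdims=True) + 1e-8)
    fused = (weights * prob_maps).sum(axis=0)                   # (H, W)
    return fused, reliability

# Example: fuse three model outputs and threshold at 0.5 for the final mask.
fused, rel = reliability_weighted_fusion(np.random.rand(3, 128, 128))
mask = fused > 0.5
```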
Results: ER2Net improved performance compared with individual models. WR achieved IoU = 0.886, sensitivity = 0.928, precision = 0.952, and Hausdorff distance = 5.429 mm, whereas BF achieved IoU = 0.885 and sensitivity = 0.929. Reliable fusion provided the best calibration [expected calibration error = 0.00003; maximum calibration error = 0.017]. PWRE produced lower variance than conventional entropy, yielding more stable uncertainty estimates.
Conclusion: ER2Net introduces WR segmentation and BF as enhanced fusion techniques and PWRE as an uncertainty metric. Together, these advances improve segmentation accuracy, sensitivity, confidence calibration, and uncertainty estimation, paving the way for reliable MRI-based tools to support personalized treatment planning and response assessment in TNBC.
{"title":"ER<sup>2</sup>Net: an evidential reasoning rule-enabled neural network for reliable triple-negative breast cancer tumor segmentation in magnetic resonance imaging.","authors":"Kazi Md Farhad Mahmud, Ahmad Qasem, Joshua M Staley, Rachel Yoder, Allison Aripoli, Shane R Stecklein, Priyanka Sharma, Zhiguo Zhou","doi":"10.1117/1.JMI.13.1.014005","DOIUrl":"https://doi.org/10.1117/1.JMI.13.1.014005","url":null,"abstract":"<p><strong>Purpose: </strong>Triple-negative breast cancer (TNBC) is an aggressive subtype with limited treatment options and high recurrence rates. Magnetic resonance imaging (MRI) is widely used for tumor assessment, but manual segmentation is labor-intensive and variable. Existing deep learning methods often lack generalizability, calibrated confidence, and robust uncertainty quantification.</p><p><strong>Approach: </strong>We propose ER<sup>2</sup>Net, an evidential reasoning-enabled neural network for reliable TNBC tumor segmentation on MRI. ER<sup>2</sup>Net trains multiple U-Net variants with dropouts to generate diverse predictions and introduces pixel-wise reliability to quantify model agreement. We then introduce two ensemble fusion techniques: weighted reliability (WR) segmentation, which leverages pixel-wise reliability to enhance sensitivity, and Bayesian fusion (BF), which integrates predictions probabilistically for robust consensus. Confidence calibration is achieved using evidential reasoning, and we further propose pixel-wise reliable confidence entropy (PWRE) as a uncertainty measure.</p><p><strong>Results: </strong>ER<sup>2</sup>Net improved performance compared with individual models. WR achieved IoU = 0.886, sensitivity = 0.928, precision = 0.952, and Hausdorff distance = 5.429 mm, whereas BF achieved IoU = 0.885 and sensitivity = 0.929. Reliable fusion provided the best calibration [expected calibration error = 0.00003; maximum calibration error = 0.017]. PWRE produced lower variance than conventional entropy, yielding more stable uncertainty estimates.</p><p><strong>Conclusion: </strong>ER<sup>2</sup>Net introduces WR segmentation and BF as enhanced fusion techniques and PWRE as a uncertainty metric. Together, these advances improve segmentation accuracy, sensitivity, confidence calibration, and uncertainty estimation, paving the way for reliable MRI-based tools to support personalized treatment planning and response assessment in TNBC.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"13 1","pages":"014005"},"PeriodicalIF":1.7,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12853374/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146107856","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Efficient computed tomography-based image segmentation for predicting lateral cervical lymph node metastasis in papillary thyroid carcinoma.
Pub Date: 2026-01-01. Epub Date: 2026-01-13. DOI: 10.1117/1.JMI.13.1.014504
Lei Xu, Bin Zhang, Xingyuan Li, Guona Zheng, Yong Wang, YanHui Peng
Purpose: Papillary thyroid carcinoma (PTC) is a common thyroid cancer, and accurate preoperative assessment of lateral cervical lymph node metastasis is critical for surgical planning. Current methods are often subjective and prone to misdiagnosis. This study aims to improve the accuracy of metastasis evaluation using a deep learning-based segmentation method on enhanced computed tomography (CT) images.
Approach: We propose a YOLOv8-based deep learning model integrated with a deformable self-attention module to enhance metastatic lymph node segmentation. The model was trained on a large dataset of pathology-confirmed CT images from PTC patients.
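As a rough illustration of a deformable self-attention block, the sketch below lets each spatial query predict a few sampling offsets and combine the bilinearly sampled features with learned attention weights; it is a generic single-head construction for intuition, not the module used in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeformableSelfAttention(nn.Module):
    """Minimal single-head deformable-attention sketch (illustrative only)."""
    def __init__(self, dim: int, num_points: int = 4):
        super().__init__()
        self.num_points = num_points
        self.offsets = nn.Linear(dim, 2 * num_points)   # (dx, dy) per sampling point
        self.weights = nn.Linear(dim, num_points)       # attention weight per point
        self.value_proj = nn.Conv2d(dim, dim, 1)
        self.out_proj = nn.Conv2d(dim, dim, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) feature map; each pixel acts as a query.
        b, c, h, w = x.shape
        q = x.permute(0, 2, 3, 1)                                   # (B, H, W, C)
        v = self.value_proj(x)

        # Base sampling grid in normalized [-1, 1] coordinates.
        ys, xs = torch.meshgrid(torch.linspace(-1, 1, h, device=x.device),
                                torch.linspace(-1, 1, w, device=x.device), indexing="ij")
        base = torch.stack((xs, ys), dim=-1)                        # (H, W, 2)

        offsets = self.offsets(q).view(b, h, w, self.num_points, 2).tanh() * 0.1
        attn = self.weights(q).softmax(dim=-1)                      # (B, H, W, K)

        out = torch.zeros_like(x)
        for k in range(self.num_points):
            grid = (base + offsets[..., k, :]).clamp(-1, 1)         # (B, H, W, 2)
            sampled = F.grid_sample(v, grid, align_corners=True)    # (B, C, H, W)
            out = out + sampled * attn[..., k].unsqueeze(1)
        return self.out_proj(out)
```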
Results: The model demonstrated diagnostic performance comparable to experienced physicians, with high precision in identifying metastatic nodes. The deformable self-attention module improved segmentation accuracy, with strong sensitivity and specificity.
Conclusion: This deep learning approach improves the accuracy of preoperative assessment for lateral cervical lymph node metastasis in PTC patients, aiding surgical planning, reducing misdiagnosis, and lowering medical costs. It shows promise for enhancing patient outcomes in PTC management.
{"title":"Efficient computed tomography-based image segmentation for predicting lateral cervical lymph node metastasis in papillary thyroid carcinoma.","authors":"Lei Xu, Bin Zhang, Xingyuan Li, Guona Zheng, Yong Wang, YanHui Peng","doi":"10.1117/1.JMI.13.1.014504","DOIUrl":"https://doi.org/10.1117/1.JMI.13.1.014504","url":null,"abstract":"<p><strong>Purpose: </strong>Papillary thyroid carcinoma (PTC) is a common thyroid cancer, and accurate preoperative assessment of lateral cervical lymph node metastasis is critical for surgical planning. Current methods are often subjective and prone to misdiagnosis. This study aims to improve the accuracy of metastasis evaluation using a deep learning-based segmentation method on enhanced computed tomography (CT) images.</p><p><strong>Approach: </strong>We propose a YOLOv8-based deep learning model integrated with a deformable self-attention module to enhance metastatic lymph node segmentation. The model was trained on a large dataset of pathology-confirmed CT images from PTC patients.</p><p><strong>Results: </strong>The model demonstrated diagnostic performance comparable to experienced physicians, with high precision in identifying metastatic nodes. The deformable self-attention module improved segmentation accuracy, with strong sensitivity and specificity.</p><p><strong>Conclusion: </strong>This deep learning approach improves the accuracy of preoperative assessment for lateral cervical lymph node metastasis in PTC patients, aiding surgical planning, reducing misdiagnosis, and lowering medical costs. It shows promise for enhancing patient outcomes in PTC management.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"13 1","pages":"014504"},"PeriodicalIF":1.7,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12797499/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145971502","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Enhancing deep learning interpretability for hand-crafted feature-guided histologic image classification via weak-to-strong generalization.
Pub Date: 2026-01-01. Epub Date: 2026-01-20. DOI: 10.1117/1.JMI.13.1.017502
Zong Fan, Changjie Lu, Jialin Yue, Mark Anastasio, Lulu Sun, Xiaowei Wang, Hua Li
Purpose: Deep learning (DL) models have achieved promising performance in histologic whole-slide image analysis for various clinical applications. However, their black-box nature hinders the interpretability of DL features for clinical adoption. By contrast, hand-crafted features (HCFs) calculated directly from images offer strong interpretability but lack the predictive power of DL models. The relationship between DL features and HCFs remains insufficiently explored. We aim to enhance the interpretability and performance of DL models using a weak-to-strong generalization (WSG) framework that integrates HCFs into the learning process.
Approach: The proposed WSG framework leverages an interpretable and HCF-based "weak" teacher model that supervises a "strong" DL student model to learn and improve itself by generalizing from weaker forms of reasoning to stronger ones, for classification tasks. An adaptive bootstrap WSG loss function is designed to optimize the transfer of knowledge from hand-crafted to deep-learned features, enabling systematic analysis of feature interactions. Innovatively, mutual information (MI) between HCFs and DL features learned by student models is analyzed to gain insights into their correlations and the interpretability of DL features. The framework is evaluated using extensive experiments on three public datasets with diverse combinations of teacher and student models for tumor classification.
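A minimal sketch of a weak-to-strong bootstrap loss in the spirit described above: the student is supervised by the hand-crafted-feature teacher's soft labels, with an adaptive bootstrap term that lets confident student predictions contribute. The specific weighting scheme and function names are assumptions for illustration, not the paper's loss.

```python
import torch
import torch.nn.functional as F

def bootstrap_wsg_loss(student_logits: torch.Tensor,
                       weak_teacher_probs: torch.Tensor,
                       alpha: float = 0.5) -> torch.Tensor:
    """Illustrative weak-to-strong bootstrap loss (not the paper's exact form).

    The student is pulled toward the weak, HCF-based teacher, while a bootstrap
    term lets it trust its own hardened predictions in proportion to its
    confidence, so it can generalize beyond the teacher rather than imitate it.
    """
    student_log_probs = F.log_softmax(student_logits, dim=1)
    # Term 1: follow the interpretable weak teacher (cross-entropy with soft labels).
    teacher_term = -(weak_teacher_probs * student_log_probs).sum(dim=1)
    # Term 2: bootstrap on the student's own hardened predictions.
    hard_self = F.one_hot(student_logits.argmax(dim=1),
                          num_classes=student_logits.shape[1]).float()
    self_term = -(hard_self * student_log_probs).sum(dim=1)
    # Adaptive weighting: trust the student more where it is confident.
    conf = student_log_probs.exp().max(dim=1).values.detach()
    alpha_t = alpha * conf
    return ((1.0 - alpha_t) * teacher_term + alpha_t * self_term).mean()
```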
Results: The WSG framework achieves consistent improvements in classification performance across all evaluated models. Qualitative saliency-map analysis indicates that WSG supervision enables student models to concentrate on diagnostically relevant regions, thereby improving interpretability. Furthermore, quantitative analysis reveals a notable increase in MI between hand-crafted and deep-learned features following WSG training compared with that without WSG training, indicating more effective integration of expert knowledge into the learned representations.
Conclusions: Our study elucidates the key HCFs that drive DL model predictions in histologic image classification. The findings demonstrate that integrating HCFs into DL model training via the WSG framework can enhance both the interpretability and the model's predictive performance, supporting their broader clinical adoption.
{"title":"Enhancing deep learning interpretability for hand-crafted feature-guided histologic image classification via weak-to-strong generalization.","authors":"Zong Fan, Changjie Lu, Jialin Yue, Mark Anastasio, Lulu Sun, Xiaowei Wang, Hua Li","doi":"10.1117/1.JMI.13.1.017502","DOIUrl":"10.1117/1.JMI.13.1.017502","url":null,"abstract":"<p><strong>Purpose: </strong>Deep learning (DL) models have achieved promising performance in histologic whole-slide image analysis for various clinical applications. However, their black-box nature hinders the interpretability of DL features for clinical adoption. By contrast, hand-crafted features (HCFs) directly calculated from images offer strong interpretability but with reduced predictive power of DL models. The relationship between DL features and HCFs remains insufficiently explored. We aim to enhance the interpretability and performance of DL models using a weak-to-strong generalization (WSG) framework that integrates HCFs into the learning process.</p><p><strong>Approach: </strong>The proposed WSG framework leverages an interpretable and HCF-based \"weak\" teacher model that supervises a \"strong\" DL student model to learn and improve itself by generalizing from weaker forms of reasoning to stronger ones, for classification tasks. An adaptive bootstrap WSG loss function is designed to optimize the transfer of knowledge from hand-crafted to deep-learned features, enabling systematic analysis of feature interactions. Innovatively, mutual information (MI) between HCFs and DL features learned by student models is analyzed to gain insights into their correlations and the interpretability of DL features. The framework is evaluated using extensive experiments on three public datasets with diverse combinations of teacher and student models for tumor classification.</p><p><strong>Results: </strong>The WSG framework achieves consistent improvements in classification performance across all evaluated models. Qualitative saliency-map analysis indicates that WSG supervision enables student models to concentrate on diagnostically relevant regions, thereby improving interpretability. Furthermore, quantitative analysis reveals a notable increase in MI between hand-crafted and deep-learned features following WSG training compared with that without WSG training, indicating more effective integration of expert knowledge into the learned representations.</p><p><strong>Conclusions: </strong>Our study elucidates the key HCFs that drive DL model predictions in histologic image classification. The findings demonstrate that integrating HCFs into DL model training via the WSG framework can enhance both the interpretability and the model's predictive performance, supporting their broader clinical adoption.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"13 1","pages":"017502"},"PeriodicalIF":1.7,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12818695/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146020152","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
MedPTQ: a practical pipeline for real post-training quantization in 3D medical image segmentation.
Pub Date: 2026-01-01. Epub Date: 2026-02-17. DOI: 10.1117/1.JMI.13.1.014006
Chongyu Qu, Ritchie Zhao, Ye Yu, Bin Liu, Tianyuan Yao, Junchao Zhu, Bennett A Landman, Yucheng Tang, Yuankai Huo
Purpose: Quantizing deep neural networks, that is, reducing the precision (bit-width) of their computations, can remarkably decrease memory usage and accelerate processing, making these models more suitable for large-scale medical imaging applications with limited computational resources. However, many existing methods studied "simulated quantization," which simulates lower-precision operations during inference but does not actually reduce model size or improve real-world inference speed. Moreover, the potential of deploying real three-dimensional (3D) low-bit quantization on modern graphics processing units (GPUs) is still unexplored.
Approach: We introduce MedPTQ, an open-source pipeline for real post-training quantization that implements true 8-bit (INT8) inference on state-of-the-art (SOTA) 3D medical segmentation models, i.e., U-Net, SegResNet, SwinUNETR, nnU-Net, UNesT, TransUNet, ST-UNet, and VISTA3D. MedPTQ involves two main steps. First, we use TensorRT to perform simulated quantization for both weights and activations with an unlabeled calibration dataset. Second, we convert this simulated quantization into real quantization via the TensorRT engine on real GPUs, resulting in real-world reductions in model size and inference latency.
Results: Extensive experiments benchmark MedPTQ across seven models and three datasets and demonstrate that it effectively performs INT8 quantization on GPUs, reducing model size by up to 3.83× and latency by up to 2.74×, while maintaining nearly identical Dice similarity coefficient (mDSC) performance to FP32 models. This advancement enables the deployment of efficient deep learning models in medical imaging applications where computational resources are constrained. The MedPTQ code and models have been released, including U-Net and TransUNet pretrained on the BTCV dataset for abdominal (13-label) segmentation, UNesT pretrained on the Whole Brain Dataset for whole brain (133-label) segmentation, and nnU-Net, SegResNet, SwinUNETR, and VISTA3D pretrained on TotalSegmentator V2 for full body (104-label) segmentation.
Conclusions: We have introduced MedPTQ, a real post-training quantization pipeline that delivers INT8 inference for SOTA 3D artificial intelligence (AI) models in medical imaging segmentation. MedPTQ effectively reduces real-world model size, computational requirements, and inference latency without compromising segmentation accuracy on modern GPUs, as evidenced by mDSC comparable to full-precision baselines. We validate MedPTQ across a diverse set of AI architectures, ranging from convolutional-neural-network-based to transformer-based models, and a wide variety of medical imaging datasets. These datasets are collected from multiple hospitals with distinct imaging protocols, cover different body regions (such as the brain, abdomen, or full body), and include multiple imaging modalities [computed tomography (CT) and magnetic resonance (MR) imaging].
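For intuition, the NumPy sketch below shows the arithmetic behind post-training INT8 quantization (per-tensor symmetric scaling calibrated from unlabeled data); it illustrates the concept only and does not use the TensorRT API that MedPTQ relies on. Names and the simple max-calibration rule are assumptions.

```python
import numpy as np

def quantize_int8(x: np.ndarray, amax: float):
    """Map a float tensor to int8 with a symmetric per-tensor scale."""
    scale = amax / 127.0
    q = np.clip(np.round(x / scale), -128, 127).astype(np.int8)   # true int8 storage
    return q, scale

# Weights: calibration range taken from the weights themselves.
w = np.random.randn(64, 64).astype(np.float32)
w_q, w_scale = quantize_int8(w, float(np.abs(w).max()))

# Activations: range estimated from unlabeled calibration batches
# (an entropy calibrator chooses this range more carefully than a plain max).
calib = [np.random.randn(8, 64).astype(np.float32) for _ in range(3)]
act_amax = max(float(np.abs(b).max()) for b in calib)

# Dequantized view: the approximation the INT8 engine effectively computes with.
w_approx = w_q.astype(np.float32) * w_scale
```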
{"title":"MedPTQ: a practical pipeline for real post-training quantization in 3D medical image segmentation.","authors":"Chongyu Qu, Ritchie Zhao, Ye Yu, Bin Liu, Tianyuan Yao, Junchao Zhu, Bennett A Landman, Yucheng Tang, Yuankai Huo","doi":"10.1117/1.JMI.13.1.014006","DOIUrl":"https://doi.org/10.1117/1.JMI.13.1.014006","url":null,"abstract":"<p><strong>Purpose: </strong>Quantizing deep neural networks, reducing the precision (bit-width) of their computations, can remarkably decrease memory usage and accelerate processing, making these models more suitable for large-scale medical imaging applications with limited computational resources. However, many existing methods studied \"simulated quantization,\" which simulates lower precision operations during inference but does not actually reduce model size or improve real-world inference speed. Moreover, the potential of deploying real three-dimensional (3D) low-bit quantization on modern graphics processing units (GPUs) is still unexplored.</p><p><strong>Approach: </strong>We introduce MedPTQ, an open-source pipeline for real post-training quantization that implements true 8-bit (INT8) inference on state-of-the-art (SOTA) 3D medical segmentation models, i.e., U-Net, SegResNet, SwinUNETR, nnU-Net, UNesT, TransUNet, ST-UNet, and VISTA3D. MedPTQ involves two main steps. First, we use TensorRT to perform simulated quantization for both weights and activations with an unlabeled calibration dataset. Second, we convert this simulated quantization into real quantization via the TensorRT engine on real GPUs, resulting in real-world reductions in model size and inference latency.</p><p><strong>Results: </strong>Extensive experiments benchmark MedPTQ across seven models and three datasets and demonstrate that it effectively performs INT8 quantization on GPUs, reducing model size by up to 3.83× and latency by up to 2.74×, while maintaining nearly identical Dice similarity coefficient (mDSC) performance to FP32 models. This advancement enables the deployment of efficient deep learning models in medical imaging applications where computational resources are constrained. The MedPTQ code and models have been released, including U-Net, TransUNet pretrained on the BTCV dataset for abdominal (13-label) segmentation, UNesT pretrained on the Whole Brain Dataset for whole brain (133-label) segmentation, and nnU-Net, SegResNet, SwinUNETR, and VISTA3D pretrained on TotalSegmentator V2 for full body (104-label) segmentation.</p><p><strong>Conclusions: </strong>We have introduced MedPTQ, a real post-training quantization pipeline that delivers INT8 inference for SOTA 3D artificial intelligence (AI) models in medical imaging segmentation. MedPTQ effectively reduces real-world model size, computational requirements, and inference latency without compromising segmentation accuracy on modern GPUs, as evidenced by mDSC comparable to full-precision baselines. We validate MedPTQ across a diverse set of AI architectures, ranging from convolutional-neural-network-based to transformer-based models, and a wide variety of medical imaging datasets. 
These datasets are collected from multiple hospitals with distinct imaging protocols, cover different body regions (such as the brain, abdomen, or full body), and include multiple imaging modalities [computed tomography (CT) and magne","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"13 1","pages":"014006"},"PeriodicalIF":1.7,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12912285/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146221578","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
BAF-UNet: a boundary-aware segmentation model for skin lesion segmentation.
Pub Date: 2026-01-01. Epub Date: 2026-01-08. DOI: 10.1117/1.JMI.13.1.014003
Menglei Zhang, Congwei Zhang, Zhibin Quan, Bing Guo, Wankou Yang
Purpose: Accurate skin lesion segmentation plays a significant role in the diagnosis and treatment of skin cancer but is challenged by ambiguous boundaries and diverse lesion shapes and sizes. We aim to improve segmentation performance with enhanced boundary preservation.
Approach: We propose BAF-UNet, a boundary-aware segmentation network. It integrates a multiscale boundary-aware feature fusion (BFF) module to combine low-level boundary features with high-level semantic information, and a boundary-aware vision transformer (BAViT) that incorporates boundary guidance into MobileViT to capture local and global context. A boundary-focused loss function is also introduced to prioritize edge accuracy during training. The model is evaluated on ISIC2016, ISIC2017, and PH2 datasets.
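A minimal sketch of one way to implement a boundary-focused loss: extract an edge band from the ground-truth mask with a morphological gradient and upweight the pixel-wise loss there. This is a stand-in for the paper's loss, with illustrative band width and weight values.

```python
import torch
import torch.nn.functional as F

def boundary_weighted_bce(pred_logits: torch.Tensor, target: torch.Tensor,
                          band: int = 3, boundary_weight: float = 5.0) -> torch.Tensor:
    """Illustrative boundary-focused loss.

    pred_logits, target: (B, 1, H, W); target is a binary float mask.
    A boundary band is extracted with a morphological gradient (dilation minus
    erosion via max-pooling), and BCE is upweighted inside that band so that
    edge pixels dominate the gradient signal.
    """
    k = 2 * band + 1
    dilated = F.max_pool2d(target, k, stride=1, padding=band)
    eroded = -F.max_pool2d(-target, k, stride=1, padding=band)
    boundary = (dilated - eroded).clamp(0, 1)                  # 1 inside the edge band
    weights = 1.0 + boundary_weight * boundary
    return F.binary_cross_entropy_with_logits(pred_logits, target, weight=weights)

# Example: loss = boundary_weighted_bce(model(images), masks.float())
```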
Results: Experiments demonstrate that BAF-UNet improves Dice scores and boundary accuracy compared to baseline models. The BFF and BAViT modules enhance boundary delineation while maintaining robustness across lesions of varying shapes and sizes.
Conclusions: BAF-UNet effectively integrates boundary guidance into feature fusion and transformer-based context modeling, significantly improving segmentation accuracy, particularly along lesion edges, and shows potential for clinical application in automated skin cancer diagnosis.
{"title":"BAF-UNet: a boundary-aware segmentation model for skin lesion segmentation.","authors":"Menglei Zhang, Congwei Zhang, Zhibin Quan, Bing Guo, Wankou Yang","doi":"10.1117/1.JMI.13.1.014003","DOIUrl":"https://doi.org/10.1117/1.JMI.13.1.014003","url":null,"abstract":"<p><strong>Purpose: </strong>Skin lesion segmentation plays a significant role in the diagnosis and treatment of skin cancer. Accurate skin lesion segmentation is essential for skin cancer diagnosis and treatment but is challenged by ambiguous boundaries and diverse lesion shapes and sizes. We aim to improve segmentation performance with enhanced boundary preservation.</p><p><strong>Approach: </strong>We propose BAF-UNet, a boundary-aware segmentation network. It integrates a multiscale boundary-aware feature fusion (BFF) module to combine low-level boundary features with high-level semantic information, and a boundary-aware vision transformer (BAViT) that incorporates boundary guidance into MobileViT to capture local and global context. A boundary-focused loss function is also introduced to prioritize edge accuracy during training. The model is evaluated on ISIC2016, ISIC2017, and PH2 datasets.</p><p><strong>Results: </strong>Experiments demonstrate that BAF-UNet improves Dice scores and boundary accuracy compared to baseline models. The BFF and BAViT modules enhance boundary delineation while maintaining robustness across lesions of varying shapes and sizes.</p><p><strong>Conclusions: </strong>BAF-UNet effectively integrates boundary guidance into feature fusion and transformer-based context modeling, significantly improving segmentation accuracy, particularly along lesion edges, and shows potential for clinical application in automated skin cancer diagnosis.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"13 1","pages":"014003"},"PeriodicalIF":1.7,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12782429/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145953505","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ultrasound imaging using single-element biaxial beamforming.
Pub Date: 2026-01-01. Epub Date: 2025-12-23. DOI: 10.1117/1.JMI.13.1.017001
Nathan Meulenbroek, Laura Curiel, Adam Waspe, Samuel Pichardo
Purpose: Dynamic focusing of received ultrasound signals, or beamforming, is foundational for ultrasound imaging. Conventionally, it requires arrays of ultrasound sensors to estimate where sound came from using time-of-flight (TOF) measurements. We demonstrate passive beamforming with a single biaxial sensor and accurate passive acoustic mapping with two biaxial sensors using only direction of arrival (DOA) information.
Approach: We introduce two single-element biaxial beamforming algorithms and four biaxial image reconstruction algorithms for a two-element biaxial piezoceramic transducer array. Imaging of a hemispherical acoustic source is characterized in an acoustic scanning tank within the region −30.29 mm ≤ x ≤ 29.94 mm and 50.11 mm ≤ z ≤ 90.45 mm relative to the center of the array. Imaging performance is contrasted with delay, sum, and integrate (DSAI) and delay, multiply, sum, and integrate (DMSAI) algorithms.
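For reference, a minimal NumPy sketch of the conventional DSAI baseline used for comparison: each candidate source point is imaged by delaying, summing, squaring, and integrating the channel signals. Apodization and sub-sample interpolation are omitted, and the sound speed and variable names are assumptions.

```python
import numpy as np

def dsai_map(signals: np.ndarray, sensor_positions: np.ndarray,
             grid_points: np.ndarray, fs: float, c: float = 1482.0) -> np.ndarray:
    """Delay, sum, and integrate (DSAI) passive map over candidate source points.

    signals: (n_channels, n_samples) received waveforms.
    sensor_positions: (n_channels, 3) element positions in meters.
    grid_points: (n_pixels, 3) candidate source positions in meters.
    """
    n_ch, n_t = signals.shape
    image = np.zeros(len(grid_points))
    for i, p in enumerate(grid_points):
        delays = np.linalg.norm(sensor_positions - p, axis=1) / c        # seconds
        shifts = np.round((delays - delays.min()) * fs).astype(int)      # samples
        aligned = np.zeros(n_t)
        for ch in range(n_ch):
            s = shifts[ch]
            aligned[: n_t - s] += signals[ch, s:]                        # advance each channel
        image[i] = np.sum(aligned ** 2)                                  # integrate energy
    return image
```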
Results: Single-element biaxial beamforming can identify DOA with a median error (± interquartile range) of 0.36 ± 0.63 deg and median full-width half-prominence of 7.3 ± 8.6 deg. Using both array elements, DOA-only images demonstrate overall median localization error of 6.41 mm (lateral: 1.02 mm, axial: 5.85 mm, signal-to-noise ratio (SNR): 15.37) and DOA + TOF images demonstrate overall median error of 6.91 mm (lateral: 1.69 mm, axial: 6.11 mm, SNR: 18.37).
Conclusions: To the best of our knowledge, we provide the first demonstration of single-element beamforming using a single stationary piezoceramic and the first demonstration of passive ultrasound imaging without the use of TOF information. These results enable simpler, smaller, more cost-effective arrays for passive ultrasound imaging.
{"title":"Ultrasound imaging using single-element biaxial beamforming.","authors":"Nathan Meulenbroek, Laura Curiel, Adam Waspe, Samuel Pichardo","doi":"10.1117/1.JMI.13.1.017001","DOIUrl":"https://doi.org/10.1117/1.JMI.13.1.017001","url":null,"abstract":"<p><strong>Purpose: </strong>Dynamic focusing of received ultrasound signals, or beamforming, is foundational for ultrasound imaging. Conventionally, it requires arrays of ultrasound sensors to estimate where sound came from using time-of-flight (TOF) measurements. We demonstrate passive beamforming with a single biaxial sensor and accurate passive acoustic mapping with two biaxial sensors using only direction of arrival (DOA) information.</p><p><strong>Approach: </strong>We introduce two single-element biaxial beamforming algorithms and four biaxial image reconstruction algorithms for a two-element biaxial piezoceramic transducer array. Imaging of a hemispherical acoustic source is characterized in an acoustic scanning tank within the region <math><mrow><mo>-</mo> <mn>30.29</mn> <mtext> </mtext> <mi>mm</mi></mrow> </math> <math><mrow><mo>≤</mo> <mi>x</mi> <mo>≤</mo></mrow> </math> 29.94 mm and 50.11 mm <math><mrow><mo>≤</mo> <mi>z</mi> <mo>≤</mo></mrow> </math> 90.45 mm relative to the center of the array. Imaging performance is contrasted with delay, sum, and integrate (DSAI) and delay, multiply, sum, and integrate (DMSAI) algorithms.</p><p><strong>Results: </strong>Single-element biaxial beamforming can identify DOA with a median error (± interquartile range) of <math><mrow><mn>0.36</mn> <mo>±</mo> <mn>0.63</mn> <mtext> </mtext> <mi>deg</mi></mrow> </math> and median full-width half-prominence of <math><mrow><mn>7.3</mn> <mo>±</mo> <mn>8.6</mn> <mtext> </mtext> <mi>deg</mi></mrow> </math> . Using both array elements, DOA-only images demonstrate overall median localization error of 6.41 mm (lateral: 1.02 mm, axial: 5.85 mm, signal-to-noise ratio (SNR): 15.37) and DOA + TOF images demonstrate overall median error of 6.91 mm (lateral: 1.69 mm, axial: 6.11 mm, SNR: 18.37).</p><p><strong>Conclusions: </strong>To the best of our knowledge, we provide the first demonstration of single-element beamforming using a single stationary piezoceramic and the first demonstration of passive ultrasound imaging without the use of TOF information. These results enable simpler, smaller, more cost-effective arrays for passive ultrasound imaging.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"13 1","pages":"017001"},"PeriodicalIF":1.7,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12726554/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145828543","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Synthetic multi-inversion time magnetic resonance images for visualization of subcortical structures.
Pub Date: 2026-01-01. Epub Date: 2026-01-06. DOI: 10.1117/1.JMI.13.1.014002
Savannah P Hays, Lianrui Zuo, Anqi Feng, Yihao Liu, Blake E Dewey, Jiachen Zhuo, Ellen M Mowry, Scott D Newsome, Jerry L Prince, Aaron Carass
Purpose: Visualization of subcortical gray matter is essential in neuroscience and clinical practice, particularly for disease understanding and surgical planning. Although multi-inversion time (multi-TI) T1-weighted (T1-w) magnetic resonance (MR) imaging improves visualization, it is only acquired in specific clinical settings and not available in common public MR datasets.
Approach: We present SyMTIC (synthetic multi-TI contrasts), a deep learning method that generates synthetic multi-TI images using routinely acquired T1-w, T2-weighted (T2-w), and fluid-attenuated inversion recovery (FLAIR) images. Our approach combines image translation via deep neural networks with imaging physics to estimate longitudinal relaxation time (T1) and proton density (ρ) maps. These maps are then used to compute multi-TI images with arbitrary inversion times.
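A minimal sketch of the physics step under a simplified inversion-recovery model: given voxel-wise T1 and proton-density maps, an image at an arbitrary TI can be synthesized as |PD·(1 − 2·exp(−TI/T1))|. The full MPRAGE/FGATIR signal equation also involves TR, flip angle, and readout timing, which are omitted here; function and variable names are illustrative.

```python
import numpy as np

def synthesize_ti_image(t1_map_ms: np.ndarray, pd_map: np.ndarray, ti_ms: float) -> np.ndarray:
    """Synthesize a multi-TI contrast from T1 and PD maps (simplified IR model)."""
    t1 = np.clip(t1_map_ms, 1e-3, None)              # avoid division by zero
    return np.abs(pd_map * (1.0 - 2.0 * np.exp(-ti_ms / t1)))

# Example: sweep TI from 400 to 800 ms, the range reported to enhance
# subcortical contrast.
# images = [synthesize_ti_image(t1_map, pd_map, ti) for ti in range(400, 801, 100)]
```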
Results: SyMTIC was trained using paired magnetization prepared rapid acquisition with gradient echo (MPRAGE) and fast gray matter acquisition T1 inversion recovery (FGATIR) images along with T2-w and FLAIR images. It accurately synthesized multi-TI images from standard clinical inputs, achieving image quality comparable to that from explicitly acquired multi-TI data. The synthetic images, especially for TI values between 400 and 800 ms, enhanced visualization of subcortical structures and improved segmentation of thalamic nuclei.
Conclusion: SyMTIC enables robust generation of high-quality multi-TI images from routine MR contrasts. When paired with the HACA3 algorithm, it generalizes well to varied clinical datasets, including those without FLAIR or T2-w images and unknown parameters, offering a practical solution for improving brain MR image visualization and analysis.
{"title":"Synthetic multi-inversion time magnetic resonance images for visualization of subcortical structures.","authors":"Savannah P Hays, Lianrui Zuo, Anqi Feng, Yihao Liu, Blake E Dewey, Jiachen Zhuo, Ellen M Mowry, Scott D Newsome, Jerry L Prince, Aaron Carass","doi":"10.1117/1.JMI.13.1.014002","DOIUrl":"10.1117/1.JMI.13.1.014002","url":null,"abstract":"<p><strong>Purpose: </strong>Visualization of subcortical gray matter is essential in neuroscience and clinical practice, particularly for disease understanding and surgical planning. Although multi-inversion time (multi-TI) <math> <mrow><msub><mi>T</mi> <mn>1</mn></msub> </mrow> </math> -weighted ( <math> <mrow><msub><mi>T</mi> <mn>1</mn></msub> </mrow> </math> -w) magnetic resonance (MR) imaging improves visualization, it is only acquired in specific clinical settings and not available in common public MR datasets.</p><p><strong>Approach: </strong>We present SyMTIC (synthetic multi-TI contrasts), a deep learning method that generates synthetic multi-TI images using routinely acquired <math> <mrow><msub><mi>T</mi> <mn>1</mn></msub> </mrow> </math> -w, <math> <mrow><msub><mi>T</mi> <mn>2</mn></msub> </mrow> </math> -weighted ( <math> <mrow><msub><mi>T</mi> <mn>2</mn></msub> </mrow> </math> -w), and fluid-attenuated inversion recovery (FLAIR) images. Our approach combines image translation via deep neural networks with imaging physics to estimate longitudinal relaxation time ( <math><mrow><mi>T</mi> <mn>1</mn></mrow> </math> ) and proton density ( <math><mrow><mi>ρ</mi></mrow> </math> ) maps. These maps are then used to compute multi-TI images with arbitrary inversion times.</p><p><strong>Results: </strong>SyMTIC was trained using paired magnetization prepared rapid acquisition with gradient echo (MPRAGE) and fast gray matter acquisition T1 inversion recovery (FGATIR) images along with <math> <mrow><msub><mi>T</mi> <mn>2</mn></msub> </mrow> </math> -w and FLAIR images. It accurately synthesized multi-TI images from standard clinical inputs, achieving image quality comparable to that from explicitly acquired multi-TI data. The synthetic images, especially for TI values between 400 to 800 ms, enhanced visualization of subcortical structures and improved segmentation of thalamic nuclei.</p><p><strong>Conclusion: </strong>SyMTIC enables robust generation of high-quality multi-TI images from routine MR contrasts. When paired with the HACA3 algorithm, it generalizes well to varied clinical datasets, including those without FLAIR or <math> <mrow><msub><mi>T</mi> <mn>2</mn></msub> </mrow> </math> -w images and unknown parameters, offering a practical solution for improving brain MR image visualization and analysis.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"13 1","pages":"014002"},"PeriodicalIF":1.7,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12770912/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145918841","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}