State and Diffusion of National Institutes of Health Funding of AI in Radiology.
Pub Date: 2026-02-11 | DOI: 10.1007/s10278-026-01870-x
Mohamed Sobhi Jabal, Miriam Chisholm, Vikash Gupta, Barbaros Selnur Erdal, David Kallmes, Waleed Brinjikji, Mustafa Bashir, Evan Calabrese, Kirti Magudia
Artificial intelligence research has profound implications for the future of radiology, making it essential to understand funding patterns and diffusion rate from the National Institutes of Health (NIH), historically the leading source of biomedical research funding in the United States. Recent changes in federal funding further necessitate understanding the trends and focus areas for future comparison and strategic decisions by researchers, institutions, and policymakers adapting to the evolving funding landscape. This retrospective study searched and analyzed active NIH-funded projects as of January 2025 and temporally over the last decade (2015-2024) using the NIH RePORTER and ExPORTER databases. An automated large language model pipeline was employed for thematic extraction and categorization of active projects. Diffusion rate analyses were performed to examine the progression of funding distribution across institutes. Descriptive statistics were provided for grant types, administering institutes, principal investigator details, organizations, geography, and research topics. Among active grants focused on AI in radiology, the National Cancer Institute led in total projects (188; $117.0 M), while the National Heart, Lung, and Blood Institute had the greatest funding ($167.3 M). The most common grant type for AI in radiology was R01 (547 projects; $326.1 M), followed by R21 (85 projects; $24.2 M) and U01 (51 projects; $65.6 M). Funding was concentrated in major academic institutions. Over the years, annual funding grew approximately 13.7-fold from $46.4 M (FY2015) to $633.5 M (FY2024), and integration of AI projects into radiology research increased approximately eightfold (from 3.9% to 30.4%). AI diffusion demonstrated exponential growth (compound annual growth rate [CAGR] 25.5%, R² = 0.97) with a doubling time of 2.94 years. At 30.4% penetration, the field has entered the Early Majority phase, approaching a critical 50% inflection point projected for 2028-2030. Topic analysis of awarded NIH grants on AI in radiology revealed a high frequency of deep learning applications, magnetic resonance imaging, and neurological applications, followed by oncology.
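As a quick sanity check on the reported growth figures, the short Python sketch below recomputes the compound annual growth rate and doubling time from the endpoint values quoted in the abstract (3.9% to 30.4% penetration over FY2015-FY2024); the authors' fitted exponential model may yield slightly different numbers, such as the reported 2.94-year doubling time.

```python
import math

# Reported endpoints from the abstract (assumed FY2015 -> FY2024, i.e., 9 annual steps).
share_2015 = 0.039   # AI share of radiology-related NIH projects in FY2015
share_2024 = 0.304   # AI share in FY2024
years = 2024 - 2015  # 9 compounding periods

# Compound annual growth rate of the AI penetration share.
cagr = (share_2024 / share_2015) ** (1 / years) - 1

# Doubling time implied by steady compound growth at that rate.
doubling_time = math.log(2) / math.log(1 + cagr)

print(f"CAGR: {cagr:.1%}")                           # ~25.6%, close to the reported 25.5%
print(f"Doubling time: {doubling_time:.2f} years")   # ~3.0 years from endpoints vs. 2.94 from the fit

# Funding growth check: $46.4M (FY2015) -> $633.5M (FY2024).
print(f"Funding growth: {633.5 / 46.4:.1f}-fold")    # ~13.7-fold, as reported
```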
{"title":"State and Diffusion of National Institutes of Health Funding of AI in Radiology.","authors":"Mohamed Sobhi Jabal, Miriam Chisholm, Vikash Gupta, Barbaros Selnur Erdal, David Kallmes, Waleed Brinjikji, Mustafa Bashir, Evan Calabrese, Kirti Magudia","doi":"10.1007/s10278-026-01870-x","DOIUrl":"https://doi.org/10.1007/s10278-026-01870-x","url":null,"abstract":"<p><p>Artificial intelligence research has profound implications for the future of radiology, making it essential to understand funding patterns and diffusion rate from the National Institutes of Health (NIH), historically the leading source of biomedical research funding in the United States. Recent changes in federal funding further necessitate understanding the trends and focus areas for future comparison and strategic decisions by researchers, institutions, and policymakers adapting to the evolving funding landscape. This retrospective study searched and analyzed active NIH-funded projects as of January 2025 and temporally over the last decade (2015-2024) using the NIH RePORTER and ExPORTER databases. An automated large language model pipeline was employed for thematic extraction and categorization of active projects. Diffusion rate analyses were performed to examine the progression of funding distribution across institutes. Descriptive statistics were provided for grant types, administering institutes, principal investigator details, organizations, geography, and research topics. Among active grants focused on AI in radiology, the National Cancer Institute led in total projects (188; $117.0 M), while the National Heart, Lung, and Blood Institute had the greatest funding ($167.3 M). The most common grant type for AI in radiology was R01 (547 projects; $326.1 M), followed by R21 (85 projects; $24.2 M) and U01 (51 projects; $65.6 M). Funding was concentrated in major academic institutions. Over the years, annual funding grew approximately 13.7-fold from $46.4 M (FY2015) to $633.5 M (FY2024), and integration of AI projects into radiology research increased approximately eightfold (from 3.9% to 30.4%). AI diffusion demonstrated exponential growth (Compound Annual Growth Rate, CAGR 25.5%, R<sup>2</sup> = 0.97) with a doubling time of 2.94 years. At 30.4% penetration, the field has entered the Early Majority phase, approaching a critical 50% inflection point projected for 2028-2030. Topic analysis of awarded NIH grants on AI in radiology revealed high frequency of deep learning applications, magnetic resonance imaging, and neurological applications followed by oncology.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2026-02-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146168943","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Correction: Comparative Performance Evaluation of Federated and Centralized Learning for Velum and OTE Segmentation in Sleep Endoscopy Images.
Pub Date: 2026-02-11 | DOI: 10.1007/s10278-026-01859-6
Jong Chan Yeom, Jin Youp Kim, Young Jae Kim, Kwang Gi Kim, Chae-Seo Rhee
{"title":"Correction: Comparative Performance Evaluation of Federated and Centralized Learning for Velum and OTE Segmentation in Sleep Endoscopy Images.","authors":"Jong Chan Yeom, Jin Youp Kim, Young Jae Kim, Kwang Gi Kim, Chae-Seo Rhee","doi":"10.1007/s10278-026-01859-6","DOIUrl":"https://doi.org/10.1007/s10278-026-01859-6","url":null,"abstract":"","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2026-02-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146159793","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Network for Real-time Laryngeal Lesions Video Object Detection.
Pub Date: 2026-02-06 | DOI: 10.1007/s10278-026-01855-w
Yan Wang, Yiran Pan, Wulin Wen, Peng Yang, Jibo Wang, Fa Yang, Jingming Dong, Linjing Zhang, Xiaoying Pan
Early and accurate diagnosis of nasopharyngeal-laryngeal tumors is critical for improving patient prognosis. Deep learning methods have achieved significant progress in the automatic detection of lesions in static endoscopic images. However, during nasopharyngeal-laryngeal endoscopy, the quality of endoscopic videos often suffers from motion blur, uneven exposure, and reflective artifacts, which adversely affect the performance of existing static image detectors. Therefore, we propose a novel two-stage video lesion detection network, DynSTPN, to address the challenge of lesion detection in complex scenarios. First, in the prompt generation network stage, we design a dynamic prompt generator that produces discriminative prompts based on spatio-temporal feature representations of reference frames to mitigate quality degradation in inference frames. Second, at the object detection network stage, we introduce an adaptive differentiable gating mechanism to integrate the reference frames' prompt information, dynamically adjusting the enhancement effect of reference frames on the inference frame. Experiments were conducted on two datasets: the self-constructed four-category nasopharyngeal-laryngeal lesion video object detection (NLLVOD) dataset and the publicly available ImageNet VID dataset. Compared to state-of-the-art (SOTA) methods, DynSTPN achieved the best balance between detection accuracy and efficiency on the VID dataset. On the NLLVOD dataset, DynSTPN achieved a superior detection accuracy of 79.6% and a speed of 29.4 FPS, meeting the real-time requirements for clinical applications. These results significantly outperform the SOTA static image detector YOLOv12-M. Experimental results demonstrate that DynSTPN effectively leverages information from video reference frames to enhance detection performance, achieving superior accuracy compared to SOTA image/video methods and thereby offering enhanced clinical applicability.
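The abstract does not describe the gating mechanism in detail, so the following PyTorch sketch is only an illustrative stand-in for how a differentiable gate can blend a reference-frame prompt into an inference-frame feature map; the module name, shapes, and layers are assumptions, not the DynSTPN implementation.

```python
import torch
import torch.nn as nn

class GatedPromptFusion(nn.Module):
    """Illustrative sketch of differentiable gating between an inference-frame feature map
    and a prompt derived from reference frames. The real DynSTPN design is not specified
    in the abstract; channel counts and layer choices here are assumptions."""

    def __init__(self, channels: int):
        super().__init__()
        # The gate is predicted from the concatenated inference and prompt features.
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, inference_feat: torch.Tensor, prompt_feat: torch.Tensor) -> torch.Tensor:
        g = self.gate(torch.cat([inference_feat, prompt_feat], dim=1))
        # Convex combination: g controls how strongly the reference-frame prompt
        # enhances (or is ignored for) each spatial location of the inference frame.
        return g * prompt_feat + (1 - g) * inference_feat

# Toy usage with random feature maps (batch=1, channels=64, 32x32 grid).
fusion = GatedPromptFusion(64)
out = fusion(torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32))
print(out.shape)  # torch.Size([1, 64, 32, 32])
```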
{"title":"Network for Real-time Laryngeal Lesions Video Object Detection.","authors":"Yan Wang, Yiran Pan, Wulin Wen, Peng Yang, Jibo Wang, Fa Yang, Jingming Dong, Linjing Zhang, Xiaoying Pan","doi":"10.1007/s10278-026-01855-w","DOIUrl":"https://doi.org/10.1007/s10278-026-01855-w","url":null,"abstract":"<p><p>Early and accurate diagnosis of nasopharyngeal-laryngeal tumors is critical for improving patient prognosis. Deep learning methods have achieved significant progress in the automatic detection of lesions in static endoscopic images. However, during nasopharyngeal-laryngeal endoscopy, the quality of endoscopic videos often suffers from motion blur, uneven exposure, and reflective artifacts, which adversely affect the performance of existing static image detectors. Therefore, we propose a novel two-stage video lesion detection network, DynSTPN, to address the challenge of lesion detection in complex scenarios. First, in the prompt generation network stage, we design a dynamic prompt generator that generates discriminative prompt based on spatio-temporal feature representations of reference frames to mitigate quality degradation in inference frames. Second, at the object detection network stage, we introduce an adaptive differentiable gating mechanism to integrate reference frames' prompt information, dynamically adjusting the enhancement effect of reference frames on the inference frame. Experiments were conducted on two datasets: the self-constructed four-category nasopharyngeal-laryngeal lesion video object detection (NLLVOD) and the publicly available ImageNet VID dataset. Compared to state-of-the-art (SOTA) methods, DynSTPN achieved the best balance between detection accuracy and efficiency on the VID dataset. On the NLLVOD dataset, DynSTPN achieved a superior detection accuracy of 79.6% and speed of 29.4 FPS, meeting the real-time requirements for clinical applications. These results significantly outperform SOTA static image detector, YOLOv12-M. Experimental results demonstrate that DynSTPN effectively leverages information from video reference frames to enhance detection performance, achieving superior accuracy compared to SOTA image/video methods, thereby offering enhanced clinical applicability.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2026-02-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146133863","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multi-task Nomogram Model for Predicting Rebleeding and Survival Outcomes in Cerebral Contusion.
Pub Date: 2026-02-06 | DOI: 10.1007/s10278-026-01861-y
Yan Pan, Hao Xu, Mengge Ye, Wei Zhang, Feng Wang, Yu Zhang, Xuefei Deng
This study aimed to develop a multi-task nomogram model based on CT imaging to accurately predict rebleeding and survival prognosis in patients with cerebral contusion, thereby enhancing early risk assessment and personalized treatment strategies. A retrospective cohort of 427 patients with CT-confirmed cerebral contusions was analyzed. The study integrated clinical data (e.g., Glasgow Coma Scale scores), radiomics features extracted from CT images, and deep transfer learning (DTL) outputs using a DenseNet121 architecture. A multi-task nomogram model was constructed to concurrently predict rebleeding risk and survival outcomes. Model performance was evaluated using ROC curves, calibration plots, decision curve analysis (DCA), and Harrell's concordance index (C-index). Interpretability was examined using Gradient-weighted Class Activation Mapping (Grad-CAM). The nomogram model exhibited excellent predictive performance, achieving areas under the curve (AUC) values of 0.973 for the training cohort and 0.959 for the testing cohort regarding rebleeding prediction. It also showed the highest prognostic accuracy (C-index 0.857, p < 0.0001). The model highlighted the critical role of GCS scores, particularly in moderate TBI cases (GCS 6-8), where timely intervention improved survival rates by 20%. Grad-CAM visualization confirmed the model's ability to localize hemorrhage regions accurately. A multi-task nomogram model, combining clinical, radiomic, and DTL features, provides a robust tool for early prediction of rebleeding and survival in cerebral contusion patients. Its integration with GCS scores facilitates targeted interventions, especially for moderate cerebral contusion cases, underscoring its clinical utility in improving outcomes.
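For readers unfamiliar with the reported C-index of 0.857, the minimal Python sketch below shows how Harrell's concordance index is computed from predicted risks and right-censored survival times; the data are illustrative, not the study cohort.

```python
import numpy as np

def harrell_c_index(time, event, risk):
    """Minimal Harrell's C-index for right-censored data: among comparable pairs
    (the earlier time had an observed event), count pairs where the higher predicted
    risk belongs to the earlier failure; prediction ties count as 0.5."""
    time, event, risk = map(np.asarray, (time, event, risk))
    concordant, comparable = 0.0, 0
    n = len(time)
    for i in range(n):
        for j in range(n):
            if event[i] == 1 and time[i] < time[j]:  # i failed before j's follow-up ended
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1
                elif risk[i] == risk[j]:
                    concordant += 0.5
    return concordant / comparable

# Toy example: higher predicted risk should go with earlier observed deaths.
t = [5, 8, 12, 20, 25]
e = [1, 1, 0, 1, 0]            # 1 = event observed, 0 = censored
r = [0.9, 0.7, 0.4, 0.5, 0.1]  # model-predicted risk scores
print(round(harrell_c_index(t, e, r), 3))  # 1.0 for this perfectly concordant toy set
```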
{"title":"Multi-task Nomogram Model for Predicting Rebleeding and Survival Outcomes in Cerebral Contusion.","authors":"Yan Pan, Hao Xu, Mengge Ye, Wei Zhang, Feng Wang, Yu Zhang, Xuefei Deng","doi":"10.1007/s10278-026-01861-y","DOIUrl":"https://doi.org/10.1007/s10278-026-01861-y","url":null,"abstract":"<p><p>This study aimed to develop a multi-task nomogram model based on CT imaging to accurately predict rebleeding and survival prognosis in patients with cerebral contusion, thereby enhancing early risk assessment and personalized treatment strategies. A retrospective cohort of 427 patients with CT-confirmed cerebral contusions was analyzed. The study integrated clinical data (e.g., Glasgow Coma Scale scores), radiomics features extracted from CT images, and deep transfer learning (DTL) outputs using a DenseNet121 architecture. A multi-task nomogram model was constructed to concurrently predict rebleeding risk and survival outcomes. Model performance was evaluated using ROC curves, calibration plots, decision curve analysis (DCA), and Harrell's concordance index (C-index). Interpretability was examined using Gradient-weighted Class Activation Mapping (Grad-CAM). The nomogram model exhibited excellent predictive performance, achieving areas under the curve (AUC) values of 0.973 for the training cohort and 0.959 for the testing cohort regarding rebleeding prediction. It also showed the highest prognostic accuracy (C-index 0.857, p < 0.0001). The model highlighted the critical role of GCS scores, particularly in moderate TBI cases (GCS 6-8), where timely intervention improved survival rates by 20%. Grad-CAM visualization confirmed the model's ability to localize hemorrhage regions accurately. A multi-task nomogram model, combining clinical, radiomic, and DTL features, provides a robust tool for early prediction of rebleeding and survival in cerebral contusion patients. Its integration with GCS scores facilitates targeted interventions, especially for moderate cerebral contusion cases, underscoring its clinical utility in improving outcomes.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2026-02-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146133875","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cross-Species Self-supervised Transfer Learning for Pulmonary Lobe Segmentation in Nonhuman Primates.
Pub Date: 2026-02-06 | DOI: 10.1007/s10278-026-01853-y
Winston T Chu, William Alexander Holland, Maria Krantz, Fatemeh Homayounieh, Shiva Singh, Phillip J Sayre, Joseph Laux, Edmond Adib, Mark Rustad, Jens H Kuhn, Venkatesh Mani, Claudia Calcagno, Gabriella Worwa, Ian Crozier, Jeffrey Solomon
Annotations of 3D medical images for segmentation require specialized expertise and are time-consuming, making large, labeled datasets rare and challenging to produce. Our objective was to investigate whether large unlabeled human datasets can be leveraged using cross-species self-supervised transfer learning to enhance the segmentation of pulmonary lobes in computed tomography (CT) scans from nonhuman primates with and without lower respiratory infection. A total of 1667 unlabeled human chest CT scans were assembled from two publicly available sources, and 23 chest CT scans of crab-eating macaques were annotated for the locations of the pulmonary lobes. The unlabeled human scans were used to train a 3D vision transformer (ViT) autoencoder in a self-supervised manner using contrastive learning. The pretrained ViT encoder was transferred to a U-Net transformers (UNETR) segmentation model, which was then trained using the labeled macaque dataset to perform pulmonary lobe segmentation. Ablation experiments on the effects of self-supervised pretraining, layer freezing, and data augmentation were conducted. The segmentation model with cross-species self-supervised pretraining achieved high performance (Dice similarity coefficient [DSC] = 90.31 ± 1.77) that was significantly greater than without pretraining (ΔDSC = 1.2%, t_paired = 5.3, p = 1.8E-3). Ablation experiments on the human pretraining data demonstrated that the amount of data and diversity of sources were important to performance. In fine-tuning, freezing just the first three layers of the ViT produced the best-performing model, and data augmentation before self-supervised pretraining and supervised fine-tuning was critical for high performance. Cross-species self-supervised transfer learning significantly improved the macaque pulmonary lobe segmentation performance with no additional acquisition or annotation costs.
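The abstract reports that freezing only the first three encoder layers worked best during fine-tuning. The sketch below illustrates that step in PyTorch using a generic transformer stack as a stand-in; the study's actual 3D ViT/UNETR architecture and hyperparameters are not reproduced here.

```python
import torch
import torch.nn as nn

# Stand-in encoder: a stack of standard transformer blocks. The study used a 3D ViT
# inside UNETR; this generic stack only illustrates the layer-freezing step.
encoder_layers = nn.ModuleList(
    [nn.TransformerEncoderLayer(d_model=256, nhead=8, batch_first=True) for _ in range(12)]
)

# Freeze the first three blocks (the configuration the abstract reports worked best),
# leaving deeper blocks and the segmentation decoder trainable.
for block in encoder_layers[:3]:
    for param in block.parameters():
        param.requires_grad = False

trainable = sum(p.numel() for layer in encoder_layers for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for layer in encoder_layers for p in layer.parameters())
print(f"trainable parameters: {trainable}/{total}")

# Only parameters that still require gradients are handed to the optimizer.
optimizer = torch.optim.AdamW(
    (p for layer in encoder_layers for p in layer.parameters() if p.requires_grad), lr=1e-4
)
```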
{"title":"Cross-Species Self-supervised Transfer Learning for Pulmonary Lobe Segmentation in Nonhuman Primates.","authors":"Winston T Chu, William Alexander Holland, Maria Krantz, Fatemeh Homayounieh, Shiva Singh, Phillip J Sayre, Joseph Laux, Edmond Adib, Mark Rustad, Jens H Kuhn, Venkatesh Mani, Claudia Calcagno, Gabriella Worwa, Ian Crozier, Jeffrey Solomon","doi":"10.1007/s10278-026-01853-y","DOIUrl":"https://doi.org/10.1007/s10278-026-01853-y","url":null,"abstract":"<p><p>Annotations of 3D medical images for segmentation require specialized expertise and are time-consuming, making large, labeled datasets rare and challenging to produce. Our objective was to investigate whether large unlabeled human datasets can be leveraged using cross-species self-supervised transfer learning to enhance the segmentation of pulmonary lobes in computed tomography (CT) scans from nonhuman primates with and without lower respiratory infection. A total of 1667 unlabeled human chest CT scans were assembled from two publicly available sources, and 23 chest CT scans of crab-eating macaques were annotated for the locations of the pulmonary lobes. The unlabeled human scans were used to train a 3D vision transformer (ViT) autoencoder in a self-supervised manner using contrastive learning. The pretrained ViT encoder was transferred to a U-Net transformers (UNETR) segmentation model, which was then trained using the labeled macaque dataset to perform pulmonary lobe segmentation. Ablation experiments on the effects of self-supervised pretraining, layer freezing, and data augmentation were conducted. The segmentation model with cross-species self-supervised pretraining achieved high performance (Dice similarity coefficient (DSC) = 90.31 ± 1.77) that was significantly greater than without pretraining (ΔDSC = 1.2%, t<sub>paired</sub> = 5.3, p = 1.8E-3). Ablation experiments on the human pretraining data demonstrated that the amount of data and diversity of sources were important to performance. In fine-tuning, freezing just the first three layers of the ViT produced the best-performing model, and data augmentation before self-supervised pretraining and supervised fine-tuning was critical for high performance. Cross-species self-supervised transfer learning significantly improved the macaque pulmonary lobe segmentation performance with no additional acquisition or annotation costs.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2026-02-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146133896","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Integrating Radiology & Clinical Decision Support to Drive Management of Emergency Department Patients in a Large Academic Medical Center.
Pub Date: 2026-02-05 | DOI: 10.1007/s10278-026-01849-8
Mona Avanaki, Renata R Almeida, Caitlin M Dugdale, Tarik K Alkasab
Clinical decision support (CDS) systems have increasingly optimized care pathways, but integrating imaging findings into CDS remains a challenge due to unstructured radiology outputs. The aim of this study was to evaluate the role of imaging-based decision support within an integrated clinicoradiological pathway for determining emergency department (ED) disposition using modality-based assessment. A computer-assisted reporting/decision support (CAR/DS) tool was developed to standardize chest radiograph and CT interpretations and was integrated with a CDS system. A total of 9,036 adult patients with suspected viral pneumonia presenting to the ED (07/2020-12/2021) were analyzed as a use case. Associations between CAR/DS outputs and ED disposition were assessed using correlation and logistic regression. Agreement between radiograph- and CT-derived outputs was also examined. Most exams were negative [70.9% (6,408/9,036)], and 3.1% were typical (276/9,036). Higher CAR/DS likelihood was independently associated with increased odds of admission [all p < 0.001; groups: radiograph only (OR = 1.86), CT only (OR = 1.83), or both (CT OR = 1.97, radiograph OR = 1.71)]. Agreement between radiograph and CT outputs was moderate (Spearman's ρ = 0.43, p < 0.001). Most negative radiographs were followed by a negative CT (60.7%, 389/642); only 23.8% (24/101) of typical radiographs had a subsequent typical CT, and most typical radiographs were followed by an indeterminate (37.6%, 38/101) or atypical (28.7%, 29/101) CT. CAR/DS outputs integrated into CDS systems provide actionable information that independently predicts ED disposition, and CT added value by excluding suspected pneumonia on radiographs. This exemplifies how imaging data can be standardized and seamlessly incorporated into broader decision pathways, with potential applicability well beyond pandemic-related use cases.
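The odds ratios above come from logistic regression of ED disposition on CAR/DS outputs. As an illustration of how such an OR is obtained, the sketch below fits a logistic model to synthetic data with a hypothetical ordinal CAR/DS likelihood variable; the study's actual covariates and adjustments are not described in the abstract.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in: an ordinal CAR/DS likelihood category (0=negative .. 3=typical)
# and an admission outcome whose log-odds rise with the category.
likelihood = rng.integers(0, 4, size=2000)
logit = -1.5 + 0.6 * likelihood
admitted = rng.random(2000) < 1 / (1 + np.exp(-logit))

# Fit an (essentially unpenalized) logistic regression and exponentiate the slope.
model = LogisticRegression(C=1e9).fit(likelihood.reshape(-1, 1), admitted)
odds_ratio = np.exp(model.coef_[0][0])  # OR per one-step increase in CAR/DS likelihood
print(f"OR per category step: {odds_ratio:.2f}")  # ~exp(0.6) ≈ 1.8, the same scale as the reported ORs
```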
{"title":"Integrating Radiology & Clinical Decision Support to Drive Management of Emergency Department Patients in a Large Academic Medical Center.","authors":"Mona Avanaki, Renata R Almeida, Caitlin M Dugdale, Tarik K Alkasab","doi":"10.1007/s10278-026-01849-8","DOIUrl":"https://doi.org/10.1007/s10278-026-01849-8","url":null,"abstract":"<p><p>Clinical decision support (CDS) systems have increasingly optimized care pathways. Integrating imaging findings into CDS remains a challenge due to unstructured radiology outputs. To evaluate the role of imaging-based decision support within an integrated clinicoradiological pathway for determining emergency department (ED) disposition using modality-based assessment. A computer-assisted reporting/decision support (CAR/DS) tool was developed to standardize chest radiograph and CT interpretations and integrated with a CDS system. 9,036 adult patients with suspected viral pneumonia presenting to the ED (07/2020 -12/2021) were analyzed as a use case. Associations between CAR/DS outputs and ED disposition were assessed using correlation and logistic regression. Agreement between radiograph- and CT-derived outputs was also examined. Most exams were negative [70.9% (6,408/9,036)] and 3.1% typical (276/9,036). Higher CAR/DS likelihood was independently associated with increased odds of admission [all p < 0.001; Groups: radiograph only (OR = 1.86), CT only (OR = 1.83), or both (CT OR = 1.97, radiograph OR = 1.71)]. Radiograph and CT outputs agreement was moderate (Spearman's ρ = 0.43, p < 0.001). Most negative radiographs were followed by a negative CT (60.7%, 389/642), only 23.8% (24/101) of typical radiographs had a subsequent typical CT; most typical radiographs were followed by an indeterminate (37.6%, 38/101) or atypical (28.7%, 29/101) CT. CAR/DS outputs integrated into CDS systems provide actionable information that independently predicts ED disposition. CT added value by excluding suspected pneumonia on radiographs. This exemplifies how imaging data can be standardized and seamlessly incorporated into broader decision pathways, with potential applicability well beyond pandemic-related use cases.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2026-02-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146128247","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Comparison of Image Quality Reconstructed Using Iterative Reconstruction and Deep Learning Algorithms Under Varying Dose Reductions in Dual-Energy Carotid CT Angiography.
Pub Date: 2026-02-04 | DOI: 10.1007/s10278-026-01848-9
Chenzi Wang, Juan Long, Dapeng Zhang, Lulu Fan, Zhen Wang, Xiaohan Liu, He Zhang, Chong Wang, Yang Wu, Aiyun Sun, Kai Xu, Yankai Meng
Carotid CT angiography (CTA) is valuable for diagnosing carotid artery disease but involves radiation and contrast agent risks. Deep learning image reconstruction (DLIR-H) shows potential for maintaining image quality in low-dose protocols. In this prospective study, 180 patients undergoing dual-energy CTA were divided into three groups: a control group (ASIR-V 50%, NI = 4, contrast = 0.5 mL/kg), a low-dose group (DLIR-H, NI = 11, contrast = 0.5 mL/kg), and an ultra-low-dose group (DLIR-H, NI = 13, contrast = 0.4 mL/kg). Objective image quality (CT values [CTV], noise, SNR, CNR) and subjective image quality (5-point Likert scale) were evaluated. The ultra-low-dose group achieved a 20.3% reduction in contrast volume and a 53.3% reduction in effective dose compared to the control group (P < 0.001). Both experimental groups showed lower noise and higher CNR/SNR (except at the aortic arch) than controls. However, the ultra-low-dose group had significantly lower CNR/SNR than the low-dose group (P < 0.05). Subjective image quality was superior in both experimental groups (P < 0.001), with high inter-rater agreement. DLIR-H outperformed ASIR-V in low-dose and ultra-low-dose protocols but could not fully compensate for image quality degradation when radiation and contrast were further reduced.
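SNR and CNR definitions vary between studies, and the abstract does not state which was used; the sketch below shows one common ROI-based computation (mean attenuation divided by background noise) on illustrative Hounsfield-unit samples.

```python
import numpy as np

def snr_cnr(vessel_hu, background_hu):
    """One common convention (variants exist): SNR = mean_vessel / SD_background,
    CNR = (mean_vessel - mean_background) / SD_background, with the background ROI's
    standard deviation taken as the image-noise estimate."""
    vessel_hu = np.asarray(vessel_hu, dtype=float)
    background_hu = np.asarray(background_hu, dtype=float)
    noise = background_hu.std(ddof=1)
    snr = vessel_hu.mean() / noise
    cnr = (vessel_hu.mean() - background_hu.mean()) / noise
    return snr, cnr

# Toy ROI samples in Hounsfield units (illustrative, not study data).
vessel = [452, 448, 460, 455, 449]           # contrast-enhanced carotid lumen ROI
muscle = [55, 61, 48, 52, 58, 50, 63, 47]    # adjacent muscle background ROI
print([round(v, 1) for v in snr_cnr(vessel, muscle)])
```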
{"title":"Comparison of Image Quality Reconstructed Using Iterative Reconstruction and Deep Learning Algorithms Under Varying Dose Reductions in Dual-Energy Carotid CT Angiography.","authors":"Chenzi Wang, Juan Long, Dapeng Zhang, Lulu Fan, Zhen Wang, Xiaohan Liu, He Zhang, Chong Wang, Yang Wu, Aiyun Sun, Kai Xu, Yankai Meng","doi":"10.1007/s10278-026-01848-9","DOIUrl":"https://doi.org/10.1007/s10278-026-01848-9","url":null,"abstract":"<p><p>Carotid CT angiography (CTA) is valuable for diagnosing carotid artery disease but involves radiation and contrast agent risks. Deep Learning Image Reconstruction (DLIR-H) shows potential for maintaining image quality in low-dose protocols. In this prospective study, 180 patients undergoing dual-energy CTA were divided into three groups: a control group (ASIR-V 50%, NI = 4, contrast = 0.5 mL/kg), a low-dose group (DLIR-H, NI = 11, contrast = 0.5 mL/kg), and an ultra-low-dose group (DLIR-H, NI = 13, contrast = 0.4 mL/kg). Objective (CTV[CT values], noise, SNR, CNR) and subjective (5-point Likert scale) image quality were evaluated. The ultra-low-dose group achieved a 20.3% reduction in contrast volume and a 53.3% reduction in effective dose compared to the control group (P < 0.001). Both experimental groups showed lower noise and higher CNR/SNR (except at aortic arch) than controls. However, the ultra-low-dose group had significantly lower CNR/SNR than the low-dose group (P < 0.05). Subjective image quality was superior in both experimental groups (P < 0.001), with high inter-rater agreement. DLIR-H outperformed ASIR-V in low and ultra-low-dose protocols but could not fully compensate for image quality degradation when radiation and contrast were further reduced.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2026-02-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146121775","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Detection of Swallowing Abnormalities in Pediatric FEES Recordings Using Rule-Based and Model-Based Methods.
Pub Date: 2026-02-03 | DOI: 10.1007/s10278-026-01845-y
Soolmaz Abbasi, Hisham Al-Kassem, Hamdy El-Hakim, Jacob Jaremko, Abhilash Hareendranathan
Pediatric swallowing dysfunction (SwD) poses serious health risks, including aspiration, malnutrition, and recurrent respiratory infections, making early and accurate diagnosis essential for preventing long-term sequelae such as chronic lung disease and growth failure. Fiberoptic endoscopic evaluation of swallowing (FEES) is widely used for direct visualization of the swallowing mechanism in children, offering advantages over fluoroscopy such as bedside accessibility and radiation-free imaging. During FEES, patients swallow green-dyed liquid with an endoscope positioned in the throat. Interpreting FEES recordings is a subjective, time-consuming process that requires specialized expertise. Automated, objective analysis tools would be useful to support clinical decision-making. In this study, we propose a hybrid framework for classifying pediatric FEES recordings as normal or abnormal. The approach combines a rule-based analysis, which detects the green-dyed swallowed liquid, with a transformer-based deep learning model. Frames are first filtered using a Siamese network to exclude irrelevant or low-quality frames, followed by quantification of the green frame ratio based on frames containing green patches. A confidence-guided decision strategy classifies clear-cut cases via thresholding, while delegating uncertain cases to the deep learning model for further evaluation. Evaluation on 142 pediatric FEES videos (45 normal and 97 with abnormalities) showed that the hybrid approach outperformed both the deep learning and rule-based methods individually, achieving 89.4% accuracy, 96.6% precision, and 93.3% specificity for aspiration. Our results indicate that by combining rule-based and deep learning strategies, we could reliably detect swallowing abnormalities from pediatric FEES videos with accuracies comparable to experts.
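A minimal sketch of the rule-based part is shown below: green-pixel detection by HSV thresholding, a green frame ratio, and a two-threshold decision that defers ambiguous cases to a model. The HSV band, pixel fraction, cut-offs, and the mapping from ratio to normal/abnormal are all assumptions for illustration, not the authors' values.

```python
import numpy as np
import cv2

# Assumed HSV band for the green-dyed liquid; the study's actual thresholds,
# frame-filtering network, and decision cut-offs are not given in the abstract.
LOWER_GREEN = np.array([35, 60, 60], dtype=np.uint8)
UPPER_GREEN = np.array([85, 255, 255], dtype=np.uint8)

def green_frame_ratio(frames, pixel_fraction=0.02):
    """Fraction of frames containing a visible green patch (>= pixel_fraction of pixels)."""
    green_frames = 0
    for frame in frames:  # frames: BGR uint8 arrays of shape (H, W, 3)
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, LOWER_GREEN, UPPER_GREEN)
        if (mask > 0).mean() >= pixel_fraction:
            green_frames += 1
    return green_frames / max(len(frames), 1)

def classify(frames, low=0.05, high=0.40, model=None):
    """Confidence-guided decision: clear-cut ratios are thresholded directly,
    ambiguous ones are deferred to the transformer model (hypothetical cut-offs)."""
    ratio = green_frame_ratio(frames)
    if ratio <= low:
        return "normal-by-rule", ratio
    if ratio >= high:
        return "abnormal-by-rule", ratio
    return ("model-decision" if model is None else model(frames)), ratio

# Smoke test on synthetic frames: one mostly green, two dark.
green = np.zeros((64, 64, 3), np.uint8); green[:, :, 1] = 200
dark = np.zeros((64, 64, 3), np.uint8)
print(classify([green, dark, dark]))  # ratio 0.33 falls in the deferred ("model-decision") band
```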
{"title":"Detection of Swallowing Abnormalities in Pediatric FEES Recordings Using Rule-Based and Model-Based Methods.","authors":"Soolmaz Abbasi, Hisham Al-Kassem, Hamdy El-Hakim, Jacob Jaremko, Abhilash Hareendranathan","doi":"10.1007/s10278-026-01845-y","DOIUrl":"https://doi.org/10.1007/s10278-026-01845-y","url":null,"abstract":"<p><p>Pediatric swallowing dysfunction (SwD) poses serious health risks, including aspiration, malnutrition, and recurrent respiratory infections, making early and accurate diagnosis essential for preventing long-term sequelae such as chronic lung disease and growth failure. Fiberoptic endoscopic evaluation of swallowing (FEES) is widely used for direct visualization of the swallowing mechanism in children, offering advantages over fluoroscopy such as bedside accessibility and radiation-free imaging. During FEES, patients swallow green-dyed liquid with an endoscope positioned in the throat. Interpreting FEES recordings is a subjective, time-consuming process that requires specialized expertise. Automated, objective analysis tools would be useful to support clinical decision-making. In this study, we propose a hybrid framework for classifying pediatric FEES recordings as normal or abnormal. The approach combines a rule-based analysis which detects the green-tinted swallowed liquid, with a transformer-based deep learning model. Frames are first filtered using a Siamese network to exclude irrelevant or low-quality frames, followed by quantification of the green frame ratio based on frames containing green patches. A confidence-guided decision strategy classifies clear-cut cases via thresholding, while delegating uncertain cases to the deep learning model for further evaluation. Evaluation on 142 pediatric FEES videos (45 normal and 97 with abnormalities) showed that the hybrid approach outperformed both the deep learning and rule-based methods individually, achieving 89.4% accuracy, 96.6% precision, and 93.3% specificity for aspiration. Our results indicate that by combining rule-based and deep learning strategies, we could reliably detect swallowing abnormalities from pediatric FEES videos with accuracies comparable to experts.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2026-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146115407","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Performance Evaluation of a Commercial Deep Learning Software for Detecting Intracranial Hemorrhage in a Pediatric Population.
Pub Date: 2026-02-02 | DOI: 10.1007/s10278-026-01857-8
Hadiseh Kavandi, Kyle Costenbader, Sandrine Yazbek, Peter Kamel, Noushin Yahyavi-Firouz-Abadi, Jean Jeudy
This study evaluates, in pediatric patients, a commercially available AI tool (Aidoc) for intracranial hemorrhage (ICH) detection that was originally trained on adults, addressing the critical need for timely diagnosis and current research gaps in pediatric AI applications. This single-center, retrospective study included pediatric patients aged 6-17 who underwent head CT between January 2017 and November 2022. Radiological reports (unaided by AI) and CT images were analyzed by natural language processing (NLP) and image-based algorithms, respectively, to classify ICH presence or absence. Ground truth was assumed for concordant cases. Three radiologists independently reviewed discrepant cases using majority vote. Among 2502 pediatric patients undergoing head CT, the AI algorithm flagged 292 cases as suspected ICH-positive. A total of 174 discordant cases between NLP and AI were independently reviewed to create the reference standard. Results showed 144 true positives, 6 false negatives, 148 false positives, and 2204 true negatives, yielding a sensitivity of 96.0% (91.5-98.5%) and a specificity of 93.7% (92.6-94.7%). Overall algorithm accuracy was 93.8% (92.8-94.8%). The most frequent false positives were choroid plexus calcifications and hyperdense venous sinuses, while subdural hemorrhages accounted for most false negatives. This deep learning AI algorithm trained on adult data performs well in detecting pediatric ICH, with 96.0% sensitivity and 93.7% specificity. However, the common false positives, choroid plexus calcifications and hyperdense venous sinuses, reflect pediatric-specific features, while missed subdural hemorrhages mirror known adult limitations. These results highlight the need for pediatric-focused AI training to improve diagnostic accuracy in this underserved population.
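The reported operating statistics follow directly from the confusion-matrix counts; the short sketch below reproduces them and notes the PPV/NPV implied by the same counts, which the abstract does not quote.

```python
# Worked check of the reported operating statistics from the confusion-matrix counts.
tp, fn, fp, tn = 144, 6, 148, 2204

sensitivity = tp / (tp + fn)                  # 144/150
specificity = tn / (tn + fp)                  # 2204/2352
accuracy = (tp + tn) / (tp + fn + fp + tn)    # 2348/2502

# Implied by the same counts, though not quoted in the abstract:
ppv = tp / (tp + fp)
npv = tn / (tn + fn)

print(f"sensitivity {sensitivity:.1%}, specificity {specificity:.1%}, accuracy {accuracy:.1%}")
# -> sensitivity 96.0%, specificity 93.7%, accuracy 93.8%, matching the reported values
print(f"PPV {ppv:.1%}, NPV {npv:.1%}")  # ~49.3% and ~99.7% at this cohort's ~6% ICH prevalence
```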
{"title":"Performance Evaluation of a Commercial Deep Learning Software for Detecting Intracranial Hemorrhage in a Pediatric Population.","authors":"Hadiseh Kavandi, Kyle Costenbader, Sandrine Yazbek, Peter Kamel, Noushin Yahyavi-Firouz-Abadi, Jean Jeudy","doi":"10.1007/s10278-026-01857-8","DOIUrl":"https://doi.org/10.1007/s10278-026-01857-8","url":null,"abstract":"<p><p>This study evaluates a commercially available AI tool (Aidoc) for intracranial hemorrhage (ICH) detection-originally trained on adults-in pediatric patients, addressing the critical need for timely diagnosis and current research gaps in pediatric AI applications. This single-center, retrospective study included pediatric patients aged 6-17 who underwent head CT between January 2017 and November 2022. Radiological reports (unaided by AI) and CT images were analyzed by natural language processing (NLP) and image-based algorithms, respectively, to classify ICH presence or absence. Ground truth was assumed for concordant cases. Three radiologists independently reviewed discrepant cases using majority vote. Among 2502 pediatric patients undergoing head CT, the AI algorithm flagged 292 cases as suspected ICH-positive. A total of 174 discordant cases between NLP and AI were independently reviewed to create the reference standard. Results showed 144 true positives, 6 false negatives, 148 false positives, and 2204 true negatives, yielding sensitivity of 96.0% (91.5-98.5%) and specificity of 93.7% (92.6-94.7%). Overall algorithm accuracy was 93.8% (92.8-94.8%). The most frequent false positives were choroid plexus calcifications and hyperdense venous sinuses, while subdural hemorrhages accounted for most false negatives. This deep learning AI algorithm trained on adult data performs well in detecting pediatric ICH, with 96.0% sensitivity and 93.7% specificity. However, common false positives, choroid plexus calcifications and hyperdense venous sinuses, reflect pediatric-specific features, while missed subdural hemorrhages mirror known adult limitations. Results highlight the need for pediatric-focused AI training to improve diagnostic accuracy in this underserved population.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2026-02-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146109458","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Systematic Review: Agentic AI in Neuroradiology: Technical Promise with Limited Clinical Evidence.
Pub Date: 2026-02-02 | DOI: 10.1007/s10278-025-01839-2
Sara Salehi, Varekan Keishing, Yashbir Singh, David Wei, Amirali Khosravi, Parnian Habibi, Jaidip Jagtap, Bradley J Erickson
Agentic artificial intelligence systems featuring iterative reasoning, autonomous tool use, or multi-agent collaboration have been proposed as solutions to the limitations of large language models (LLMs) in neuroradiology. However, the extent of their implementation and clinical validation remains unclear. We systematically searched PubMed, Web of Science, and Scopus (January 2022-August 2025) for studies implementing agentic AI in neuroradiology. Six independent reviewers (three medical doctors and three AI specialists) assessed full texts. Agentic AI was defined as requiring mandatory iterative reasoning plus either autonomous tool use or multi-agent collaboration. Study quality was evaluated using adapted QUADAS-AI criteria. From 230 records, 9 studies (3.90%) met inclusion criteria. Of these, five (55.60%) implemented true multi-agent architecture, two (22.20%) used hybrid or conceptual frameworks, and two (22.20%) relied on single-model LLMs without genuine agentic behavior. All nine studies were single-center with no external validation. Sample sizes were small (median 142 cases; range 16-302). The only randomized controlled trial, INSPIRE (neurophysiology with imaging correlation), demonstrated high technical performance (≈92% accuracy; AIGERS 0.94 for AI-assisted vs. 0.70 for AI-only, p < 0.001) but showed no measurable clinical benefit when physicians used AI assistance compared with independent reporting. Safety assessments were absent from all studies. Agentic AI in neuroradiology remains technically promising but clinically unproven. Severe evidence scarcity (3.90% inclusion rate), frequent overextension of the "agentic" label (30% of studies lacked genuine autonomy), and the persistent gap between technical performance and clinical utility indicate that the field remains in its early research phase. Current evidence is insufficient to support clinical deployment. Rigorous, multi-center prospective trials with patient-centered and safety outcomes are essential before clinical implementation can be responsibly considered.
{"title":"Systematic Review: Agentic AI in Neuroradiology: Technical Promise with Limited Clinical Evidence.","authors":"Sara Salehi, Varekan Keishing, Yashbir Singh, David Wei, Amirali Khosravi, Parnian Habibi, Jaidip Jagtap, Bradley J Erickson","doi":"10.1007/s10278-025-01839-2","DOIUrl":"https://doi.org/10.1007/s10278-025-01839-2","url":null,"abstract":"<p><p>Agentic artificial intelligence systems featuring iterative reasoning, autonomous tool use, or multi-agent collaboration have been proposed as solutions to the limitations of large language models (LLMs) in neuroradiology. However, the extent of their implementation and clinical validation remains unclear. We systematically searched PubMed, Web of Science, and Scopus (January 2022-August 2025) for studies implementing agentic AI in neuroradiology. Six independent reviewers (three medical doctors and three AI specialists) assessed full texts. Agentic AI was defined as requiring mandatory iterative reasoning plus either autonomous tool use or multi-agent collaboration. Study quality was evaluated using adapted QUADAS-AI criteria. From 230 records, 9 studies (3.90%) met inclusion criteria. Of these, five (55.60%) implemented true multi-agent architecture, two (22.20%) used hybrid or conceptual frameworks, and two (22.20%) relied on single-model LLMs without genuine agentic behavior. All nine studies were single center with no external validation. Sample sizes were small (median 142 cases; range 16-302). The only randomized controlled trial-INSPIRE (neurophysiology with imaging correlation)-demonstrated high technical performance (≈92% accuracy; AIGERS 0.94 for AI-assisted vs. 0.70 for AI-only, p < 0.001) but showed no measurable clinical benefit when physicians used AI assistance compared with independent reporting. Safety assessments were absent from all studies. Agentic AI in neuroradiology remains technically promising but clinically unproven. Severe evidence scarcity (3.90% inclusion rate), frequent overextension of the \"agentic\" label (30% of studies lacked genuine autonomy), and the persistent gap between technical performance and clinical utility indicate that the field remains in its early research phase. Current evidence is insufficient to support clinical deployment. Rigorous, multi-center prospective trials with patient-centered and safety outcomes are essential before clinical implementation can be responsibly considered.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2026-02-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146109463","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}