Pub Date : 2026-01-16 | DOI: 10.1016/j.jacr.2026.01.004
Jeshwanth Mohan, Alexander Lam
Title: "Patient-Friendly Summary of the ACR Appropriateness Criteria®: Lower Extremity Arterial Claudication-Imaging Assessment for Revascularization."
Pub Date : 2026-01-06 | DOI: 10.1016/j.jacr.2025.12.014
Ayushi Gupta, Victoria Chernyak, Kevin J Chang, Priya R Bhosale, Brooks D Cash, Katherine R Hall, Michael Magnetta, Joseph H Yacoub, Katherine Zukotynski, Elena K Korngold
Chronic pancreatitis (CP) is a progressive disorder of the pancreas characterized by irreversible parenchymal and ductal changes, leading to chronic pain and pancreatic insufficiency. Its impact on quality of life can be profound and may be further complicated by acute inflammation superimposed on CP (ACP), potentially accelerating functional decline and increasing morbidity. Imaging plays an important role in diagnosing both CP and ACP, determining severity, identifying underlying causes, and detecting complications. CT is particularly effective at detecting parenchymal calcifications, typically seen in later disease, and for rapid evaluation of ACP. MRI with MRCP is more sensitive for early ductal and parenchymal changes, with secretin-stimulated MRCP further improving detection in mild or early disease. In certain cases, endoscopic ultrasound adds diagnostic value and offers therapeutic intervention. Early and accurate imaging, paired with clinical and laboratory evaluation, is essential for guiding effective patient care in CP. The American College of Radiology Appropriateness Criteria are evidence-based guidelines for specific clinical conditions that are reviewed annually by a multidisciplinary expert panel. The guideline development and revision process supports the systematic analysis of the medical literature from peer-reviewed journals. Established methodology principles, such as the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) approach, are adapted to evaluate the evidence. The RAND/UCLA Appropriateness Method User Manual provides the methodology to determine the appropriateness of imaging and treatment procedures for specific clinical scenarios. In those instances where peer-reviewed literature is lacking or equivocal, experts may be the primary evidentiary source available to formulate a recommendation.
Title: "ACR Appropriateness Criteria® Chronic Pancreatitis."
Pub Date : 2026-01-06 | DOI: 10.1016/j.jacr.2025.12.013
Mariana L Meyers, Michael M Moore, Joe B Baker, Michael N Clemenshaw, Matthew L Cooper, Matthew R Hammer, Susan D John, Afif Kulaylat, Joyce Li, Sagar J Pathak, Jonathan D Samet, Marla B K Sammer, Gary R Schooler, Amit S Sura, Catharine M Walsh, Ramesh S Iyer
Ingestion or aspiration of foreign bodies (FB) is a common reason for pediatric emergency department visits. Three variants were developed: 1) In Variant 1 (suspected ingested or aspirated FB, initial imaging), neck, chest, abdomen, and pelvis radiographs are usually appropriate to identify the presence and location of a swallowed or inhaled FB; low-dose noncontrast chest CT may also be appropriate when there is high suspicion for a radiolucent FB. 2) In Variant 2 (suspected ingested FB, initial radiographs negative, next imaging study), chest CT without contrast is usually appropriate, although fluoroscopic esophagram or CT abdomen and pelvis may be appropriate; if water bead ingestion is known, abdominal ultrasound may be helpful. 3) In Variant 3 (suspected aspirated FB, initial radiographs negative, next imaging study), CT chest without contrast is usually appropriate, whereas decubitus chest radiographs and fluoroscopy studies are usually not appropriate. The American College of Radiology Appropriateness Criteria are evidence-based guidelines for specific clinical conditions that are reviewed annually by a multidisciplinary expert panel. The guideline development and revision process supports the systematic analysis of the medical literature from peer-reviewed journals. Established methodology principles, such as the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) approach, are adapted to evaluate the evidence. The RAND/UCLA Appropriateness Method User Manual provides the methodology to determine the appropriateness of imaging and treatment procedures for specific clinical scenarios. In those instances where peer-reviewed literature is lacking or equivocal, experts may be the primary evidentiary source available to formulate a recommendation.
Title: "ACR Appropriateness Criteria® Ingested or Aspirated Foreign Body-Child."
Pub Date : 2025-12-30 | DOI: 10.1016/j.jacr.2025.12.027
Christopher Keshishian, Andrew Pidutti, Spencer Rice, Dana Dabson, Angela Mihalic, Josephina Anna Vossen
Purpose: To evaluate the association between preference signaling and residency application behaviors in diagnostic radiology across presignal and postsignal eras and to compare outcomes between 6-signal and 12-signal cycles.
Methods: We analyzed self-reported data from matched diagnostic radiology residency applicants in the Texas Seeking Transparency in Application to Residency (STAR) database from the 2019 to 2025 application cycles. Primary outcomes included the number of applications submitted, interview offers received, and interviews attended. Comparisons were made between the presignal (2019-2022) and postsignal (2023-2025) eras, as well as between the 6-signal (2023) and 12-signal (2024-2025) cycles. Two-sample t tests were used for group comparisons.
Results: A total of 1,371 matched diagnostic radiology applicants met inclusion criteria. Applicants in the postsignal era submitted significantly more applications than those in the presignal era (56.0 versus 45.6, P < .001). They also received fewer interview offers (13.7 versus 17.3, P < .001) and attended slightly fewer interviews (14.1 versus 15.1, P = .0014). When comparing signaling structures, applications submitted did not differ significantly between the 6-signal and 12-signal cycles (57.3 versus 55.1, P = .462), nor did interview offers (13.2 versus 14.0, P = .127) or interviews attended (13.7 versus 14.3, P = .194). Outcomes were stable between the two 12-signal years (2024 and 2025), with no statistically significant year-to-year differences.
Conclusion: Preference signaling was associated with significant differences between the presignal and postsignal eras, but increasing the number of signals from 6 to 12 did not meaningfully change outcomes. Application and interview patterns remained stable across two consecutive 12-signal cycles. Further work is needed to understand how signals influence application review processes and to guide ongoing implementation.
Title: "The Signal Effect: Changes in Applicant and Program Behavior in the Diagnostic Radiology Application Process."
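The Methods section names two-sample t tests for the group comparisons. The STAR data are not public, so the sketch below uses synthetic samples whose sizes and means roughly mimic the reported figures (45.6 versus 56.0 applications); every number in it is an illustrative assumption, not study data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Synthetic samples only: scale and sizes are assumptions chosen to
# loosely resemble the reported group means, not the actual STAR data.
presignal = rng.normal(loc=45.6, scale=15.0, size=700)
postsignal = rng.normal(loc=56.0, scale=15.0, size=671)

# Two-sample t test, as named in the Methods section.
result = stats.ttest_ind(presignal, postsignal)
print(f"t = {result.statistic:.2f}, P = {result.pvalue:.3g}")
```

With a mean difference this large relative to the spread, the test reports a strongly significant difference, mirroring the P < .001 comparison in the Results.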
Pub Date : 2025-12-29 | DOI: 10.1016/j.jacr.2025.12.023
Haley I Tupper, Joseph B Shrager, Drew Moghanaki, Charles B Simone, Ella A Kazerooni, Eric M Hart, David T Cooke, Jeffrey B Velotta, Betty C Tong, Hari B Keshava, Cherie P Erkmen, Chi-Fu J Yang, Elliot L Servais
Title: "Misinformation and Overestimation of Computed Tomography Lung Cancer Screening Harms-Methodology Matters: A Joint Statement from The Society of Thoracic Surgeons, the American Society for Radiation Oncology, and the American College of Radiology."
Pub Date : 2025-12-29 | DOI: 10.1016/j.jacr.2025.12.025
Amina M Karage, Jeffrey Forris Beecham Chick, David S Shin, Mina S Makary, Jessica B Robbins, Eric J Monroe
Purpose: To evaluate attrition in the integrated interventional radiology (IR) residency compared with the independent IR pathway, diagnostic radiology (DR), other integrated programs, and other residency programs.
Materials and methods: This study was based on publicly available data published by the ACGME from 2017 to 2023. Residents were categorized as transferred, withdrawn, dismissed, unsuccessfully completed, or deceased. Attrition was calculated by dividing the total number of departed residents by the total number of residents within a given specialty. Data for the IR residency were compared with those for other residency programs during the same period. Odds ratios (ORs) and P values were calculated using multivariate logistic regression and χ2 tests. A P value < .05 was considered significant.
Results: Attrition for the integrated IR residency ranged from 2.3% to 5.1% during the 6-year period. Third-year IR (IR3) residents had the highest attrition at 6.70%. Attrition rates differed significantly between resident years, driven by higher attrition in resident year (RY) 3 compared with RY1, RY2, and RY4. The odds of attrition in the independent IR and DR programs were 77.6% and 52.9% lower, respectively, compared with the integrated IR program. The integrated IR residency attrition rates were comparable to general surgery (OR = 0.835; P = .178) and integrated thoracic surgery (OR = 0.751; P = .178) and higher than family medicine (P < .001), neurological surgery (P < .001), integrated vascular surgery (P < .001), obstetrics and gynecology (P < .001), internal medicine (P < .001), orthopedic surgery (P < .001), and emergency medicine (P < .001) residency programs.
Conclusion: Findings from this study demonstrate elevated attrition within integrated IR programs and highlight a need for additional research to identify risk factors and potential interventions to improve medical student education, resident support, and retention.
Title: "Attrition in the Integrated Interventional Radiology Residency Program: 2017-2023."
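The attrition and odds-ratio definitions in the Methods reduce to simple arithmetic. A minimal sketch, using hypothetical counts (the abstract does not report raw counts), showing the attrition formula and how "77.6% lower odds" maps to an OR of about 0.224:

```python
def attrition_rate(departed: int, total: int) -> float:
    # Attrition as defined in the Methods: departed residents / all residents.
    return departed / total

def odds_ratio(events_a: int, n_a: int, events_b: int, n_b: int) -> float:
    # Unadjusted OR for group A versus group B from 2x2 counts
    # (the study itself used multivariate logistic regression).
    odds_a = events_a / (n_a - events_a)
    odds_b = events_b / (n_b - events_b)
    return odds_a / odds_b

def percent_lower_odds(or_value: float) -> float:
    # "X% lower odds" relative to the reference group is 100 * (1 - OR).
    return 100 * (1 - or_value)

# Hypothetical counts for illustration only: 23 departures among 1,000
# residents gives the 2.3% low end of the reported attrition range.
print(attrition_rate(23, 1000))
# An OR of 0.224 versus integrated IR corresponds to 77.6% lower odds.
print(percent_lower_odds(0.224))
```

Equal event rates in both groups give an OR of 1.0, the null value against which the reported ORs are judged.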
Pub Date : 2025-12-26 | DOI: 10.1016/j.jacr.2025.12.026
Haniyeh Zamani, Tom Fruscello, Judy Burleson, Mythreyi Bhargavan-Chatfield, Matthew S Davenport
Purpose: To determine changes in site- and radiologist-specific imaging volumes before, during, and after the COVID-19 pandemic from a large, diverse sample of US radiology practices.
Methods: Imaging volumes (46.4 million examinations) for 1,571 unique radiologists and 167 United States radiology practices in 19 states (academic [n ≤ 5], community hospital [n = 71], multispecialty clinic [n ≤ 5], freestanding imaging center [n = 86], other [n ≤ 5]) participating in the ACR General Radiography Improvement Database (part of the National Radiology Data Registry, which is a CMS-approved Qualified Clinical Data Registry) from December 1, 2017, to February 29, 2024, were analyzed. These dates included baseline, pandemic, and postpandemic periods. Six modalities were analyzed: CT, mammography, MRI, x-ray, ultrasound, and PET-CT. National Provider Identifiers (NPIs) were used to track individual radiologists. Changes in workforce number (by NPI), workload (examinations per day), NPI attrition, and NPI turnover were calculated by quarter.
Results: Of 1,571 radiologists, 671 (43%) worked full time and read ≥100 examinations per quarter throughout the baseline period and final study quarter. Mean aggregate change in examinations read per day per radiologist from baseline to study end was modest (+0.6% [49.1 per day to 49.4 per day]). However, the top quartile radiologists (by examination volume) experienced meaningful increases in examinations per day (+30.6%; 73.9 examinations per day versus 56.6 examinations per day [baseline]) and clinical days worked per quarter (+19.7%; 46.2 days per quarter versus 38.6 days per quarter [baseline]). The number of working radiologists increased 23.6% (1,094 versus 885 [baseline]) with substantial turnover. Days worked per radiologist per quarter remained similar (37.2 days versus 39.1 days, -4.9%). Sharp examination declines during the COVID-19 pandemic were not associated with large reductions in the radiologist workforce.
Conclusion: Although the average radiologist read a similar number of examinations per day pre- versus postpandemic, top quartile radiologists read 30.6% more examinations per day and worked 19.7% more clinical days per quarter compared with the first quarter of 2018.
Title: "US Radiology Imaging and Workforce Volumes 2017-2024: An Analysis of 46.4 Million Imaging Examinations From 167 Radiology Facilities."
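All of the percentage changes in this entry are relative to baseline, so they can be reproduced directly from the figures quoted in the Results; a quick sketch:

```python
def pct_change(new: float, baseline: float) -> float:
    # Percent change relative to the baseline value.
    return 100 * (new - baseline) / baseline

# Figures as reported in the Results section.
print(round(pct_change(49.4, 49.1), 1))   # exams/day, all radiologists (+0.6%)
print(round(pct_change(73.9, 56.6), 1))   # exams/day, top quartile (+30.6%)
print(round(pct_change(46.2, 38.6), 1))   # clinical days/quarter, top quartile (+19.7%)
print(round(pct_change(1094, 885), 1))    # working radiologists (+23.6%)
print(round(pct_change(37.2, 39.1), 1))   # days worked/quarter (-4.9%)
```

Each rounded result matches the percentage stated in the abstract, confirming the quoted deltas are internally consistent.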
Pub Date : 2025-12-26 | DOI: 10.1016/j.jacr.2025.12.028
Vera Sorin, Panagiotis Korfiatis, Steve G Langer, Lewis D Hahn, Alex K Bratt, Cole J Cook, Joe D Sobek, Crystal L Butler, Christoph Wald, Bradley J Erickson, Jeremy D Collins
Objective: AI models are increasingly adopted in clinical practice, yet their generalizability outside controlled validation settings remains unclear. We aimed to evaluate the real-world performance of an FDA-cleared commercial pulmonary embolism (PE) detection model and identify technical, demographic, and clinical factors associated with performance variation, to inform postproduction monitoring and deployment strategies.
Methods: This retrospective study included 11,144 CT pulmonary angiography examinations performed in a single health system between April 2023 and June 2024, processed by a commercial PE detection model. Technical parameters (scanner manufacturer, slice thickness, dose index volume, contrast enhancement of pulmonary artery), demographic factors (age, gender, race, body mass index), and clinical comorbidities (heart failure, pulmonary hypertension, cancer) were extracted from DICOM headers and electronic health records. Univariate and multivariable logistic regression analyses identified factors associated with decreased performance.
Results: There were 1,193 of 11,144 (10.7%) PE-positive cases. The model had an overall sensitivity of 83.5% (95% confidence interval [CI] 81.3%-85.5%) and a positive predictive value of 90.5% (95% CI 88.7%-92.1%). Multivariable analysis showed significant associations between decreased sensitivity and scanner manufacturer (odds ratio [OR] 0.25, 95% CI 0.14-0.46 and OR 0.34, 95% CI 0.17-0.69, for different vendors versus reference, P < .003), increased slice thickness (OR 0.74, 95% CI 0.57-0.95 per 1-mm increase, P = .018), presence of imaging artifacts (OR 0.33, 95% CI 0.23-0.48, P < .001), heart failure (OR 0.58, 95% CI 0.38-0.88, P = .010), and pulmonary hypertension (OR 0.44, 95% CI 0.25-0.77, P = .004). Demographic factors including age, gender, race, and body mass index showed no significant associations with model performance.
Conclusion: AI performance in clinical practice varies significantly based on technical imaging parameters and patient comorbidities. Understanding these factors is essential for optimal product selection and for effective postdeployment monitoring, enabling investigation of model drift in evolving clinical settings. The findings highlight the need for local validation frameworks that account for institution-specific technical infrastructure and patient populations, to ensure safe AI deployment across diverse clinical environments.
{"title":"Factors Impacting the Performance of Deep Learning Detection of Pulmonary Emboli.","authors":"Vera Sorin, Panagiotis Korfiatis, Steve G Langer, Lewis D Hahn, Alex K Bratt, Cole J Cook, Joe D Sobek, Crystal L Butler, Christoph Wald, Bradley J Erickson, Jeremy D Collins","doi":"10.1016/j.jacr.2025.12.028","DOIUrl":"10.1016/j.jacr.2025.12.028","url":null,"abstract":"<p><strong>Objective: </strong>AI models are increasingly adopted in clinical practice, yet their generalizability outside controlled validation settings remains unclear. We aimed to evaluate the real-world performance of an FDA-cleared commercial pulmonary embolism (PE) detection model and identify technical, demographic, and clinical factors associated with performance variation, to inform postproduction monitoring and deployment strategies.</p><p><strong>Methods: </strong>This retrospective study included 11,144 CT pulmonary angiography examinations performed in a single health system between April 2023 and June 2024, processed by a commercial PE detection model. Technical parameters (scanner manufacturer, slice thickness, dose index volume, contrast enhancement of pulmonary artery), demographic factors (age, gender, race, body mass index), and clinical comorbidities (heart failure, pulmonary hypertension, cancer) were extracted from DICOM headers and electronic health records. Univariate and multivariable logistic regression analyses identified factors associated with decreased performance.</p><p><strong>Results: </strong>There were 1,193 of 11,144 (10.7%) PE-positive cases. The model had an overall 83.5% (95% confidence interval [CI] 81.3%-85.5%) sensitivity and positive predictive value was 90.5% (95% CI 88.7%-92.1%). 
Multivariable analysis showed significant associations between decreased sensitivity and scanner manufacturer (odds ratio [OR] 0.25, 95% CI 0.14-0.46 and OR 0.34, 95% CI 0.17-0.69, for different vendors versus reference, P < .003), increased slice thickness (OR 0.74, 95% CI 0.57-0.95 per 1-mm increase, P = .018), presence of imaging artifacts (OR 0.33, 95% CI 0.23-0.48, P < .001), heart failure (OR 0.58, 95% CI 0.38-0.88, P = .010), and pulmonary hypertension (OR 0.44, 95% CI 0.25-0.77, P = .004). Demographic factors including age, gender, race, and body mass index showed no significant associations with model performance.</p><p><strong>Conclusion: </strong>AI performance in clinical practice varies significantly based on technical imaging parameters and patient comorbidities. Understanding these factors is essential for optimal product selection and for effective postdeployment monitoring, enabling investigation of model drift in evolving clinical settings. The findings highlight the need for local validation frameworks that account for institution-specific technical infrastructure and patient populations, to ensure safe AI deployment across diverse clinical environments.</p>","PeriodicalId":73968,"journal":{"name":"Journal of the American College of Radiology : JACR","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-12-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145851385","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
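For a binary factor such as scanner vendor, the kind of univariate screen described above reduces to an odds ratio from a 2×2 table of detected versus missed PEs. A minimal sketch with purely hypothetical counts (the study's per-vendor tables are not given in the abstract), using a standard Wald confidence interval on the log odds ratio:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a/b = PEs detected/missed on the comparison scanner,
    c/d = PEs detected/missed on the reference scanner."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical: 80/100 PEs detected on vendor X vs 180/200 on the reference.
or_, lo, hi = odds_ratio_ci(80, 20, 180, 20)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # OR < 1: lower detection odds
```

An OR below 1 with a CI excluding 1, as here, is the pattern the abstract reports for the non-reference vendors; the multivariable model additionally adjusts each factor for the others.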
Pub Date : 2025-12-24DOI: 10.1016/j.jacr.2025.12.024
Antonio Verdone, Aidan Cardall, Fardeen Siddiqui, Motaz Nashawaty, Danielle Rigau, Youngjoon Kwon, Mira Yousef, Shalin Patel, Alex Kieturakis, Eric Kim, Laura Heacock, Beatriu Reig, Yiqiu Shen
Objective: Radiology residents require timely, personalized feedback to develop accurate image analysis and reporting skills. Increasing clinical workload often limits attendings' ability to provide guidance. This study evaluates a HIPAA-compliant Generative Pretrained Transformer (GPT)-4o system that delivers automated feedback on breast imaging reports drafted by residents in real clinical settings.
Methods: We analyzed 5,000 resident-attending report pairs from routine practice at a multisite US health system. GPT-4o was prompted with clinical instructions to identify common errors and provide feedback. A reader study using 100 report pairs was conducted. Four attending radiologists and four residents independently reviewed each pair, determined whether predefined error types were present, and rated GPT-4o's feedback as helpful or not. Agreement between GPT and readers was assessed using percent match. Interreader reliability was measured with Krippendorff's α. Educational value was measured as the proportion of cases rated helpful.
Results: Three common error types were identified: (1) omission or addition of key findings, (2) incorrect use or omission of technical descriptors, and (3) final assessment inconsistent with findings. GPT-4o showed strong agreement with attending consensus: 90.5%, 78.3%, and 90.4% (Cohen's κ: 0.790, 0.550, and 0.615) across error types. Interreader reliability among all eight readers showed moderate to substantial agreement (α = 0.767, 0.595, 0.567). When each reader was individually replaced with GPT-4o and interreader agreement among seven readers and GPT was recalculated, the effect was not statistically significant (Δ = -0.004 to 0.002, all P > .05). GPT's feedback was rated helpful in most cases: 89.8%, 83.0%, and 92.0%.
Discussion: GPT-4o can reliably identify key educational errors. It may serve as a scalable tool to support radiology education.
{"title":"Evaluating Generative Artificial Intelligence as an Educational Tool for Radiology Resident Report Drafting.","authors":"Antonio Verdone, Aidan Cardall, Fardeen Siddiqui, Motaz Nashawaty, Danielle Rigau, Youngjoon Kwon, Mira Yousef, Shalin Patel, Alex Kieturakis, Eric Kim, Laura Heacock, Beatriu Reig, Yiqiu Shen","doi":"10.1016/j.jacr.2025.12.024","DOIUrl":"10.1016/j.jacr.2025.12.024","url":null,"abstract":"<p><strong>Objective: </strong>Radiology residents require timely, personalized feedback to develop accurate image analysis and reporting skills. Increasing clinical workload often limits attendings' ability to provide guidance. This study evaluates a HIPAA-compliant Generative Pretrained Transformer (GPT)-4o system that delivers automated feedback on breast imaging reports drafted by residents in real clinical settings.</p><p><strong>Methods: </strong>We analyzed 5,000 resident-attending report pairs from routine practice at a multisite US health system. GPT-4o was prompted with clinical instructions to identify common errors and provide feedback. A reader study using 100 report pairs was conducted. Four attending radiologists and four residents independently reviewed each pair, determined whether predefined error types were present, and rated GPT-4o's feedback as helpful or not. Agreement between GPT and readers was assessed using percent match. Interreader reliability was measured with Krippendorff's α. Educational value was measured as the proportion of cases rated helpful.</p><p><strong>Results: </strong>Three common error types were identified: (1) omission or addition of key findings, (2) incorrect use or omission of technical descriptors, and (3) final assessment inconsistent with findings. GPT-4o showed strong agreement with attending consensus: 90.5%, 78.3%, and 90.4% (Cohen's κ: 0.790, 0.550, and 0.615) across error types. Interreader reliability among all eight readers showed moderate to substantial agreement (α = 0.767, 0.595, 0.567). 
When each reader was individually replaced with GPT-4o and interreader agreement among seven readers and GPT was recalculated, the effect was not statistically significant (Δ = -0.004 to 0.002, all P > .05). GPT's feedback was rated helpful in most cases: 89.8%, 83.0%, and 92.0%.</p><p><strong>Discussion: </strong>GPT-4o can reliably identify key educational errors. It may serve as a scalable tool to support radiology education.</p>","PeriodicalId":73968,"journal":{"name":"Journal of the American College of Radiology : JACR","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12869900/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145844501","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
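The two agreement metrics named in this abstract, percent match and Cohen's κ, are simple to compute from paired labels. A minimal sketch with illustrative labels only (not study data):

```python
def percent_match(y1, y2):
    """Fraction of cases where two raters assign the same label."""
    return sum(a == b for a, b in zip(y1, y2)) / len(y1)

def cohens_kappa(y1, y2):
    """Chance-corrected agreement between two raters on nominal labels."""
    n = len(y1)
    po = percent_match(y1, y2)  # observed agreement
    labels = set(y1) | set(y2)
    # expected agreement if raters labeled independently at their own base rates
    pe = sum((y1.count(l) / n) * (y2.count(l) / n) for l in labels)
    return (po - pe) / (1 - pe)

# Hypothetical: model vs attending-consensus error labels on 10 reports.
model = [1, 1, 0, 0, 1, 0, 0, 1, 0, 0]
attnd = [1, 1, 0, 0, 1, 0, 1, 1, 0, 0]
print(percent_match(model, attnd))          # 0.9
print(round(cohens_kappa(model, attnd), 3))  # 0.8
```

κ discounts the agreement expected by chance, which is why the abstract's high percent-match figures (90.5%, 78.3%, 90.4%) correspond to lower κ values (0.790, 0.550, 0.615); Krippendorff's α generalizes the same idea to more than two raters.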
Pub Date : 2025-12-17DOI: 10.1016/j.jacr.2025.12.021
Sheena Y Chu, Elizabeth A Briel, Lu Mao, John W Garrett, Scott B Reeder, Ali Pirasteh
{"title":"Impact of Online Safety Screening on Outpatient MRI Workflow.","authors":"Sheena Y Chu, Elizabeth A Briel, Lu Mao, John W Garrett, Scott B Reeder, Ali Pirasteh","doi":"10.1016/j.jacr.2025.12.021","DOIUrl":"10.1016/j.jacr.2025.12.021","url":null,"abstract":"","PeriodicalId":73968,"journal":{"name":"Journal of the American College of Radiology : JACR","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145795711","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}