Pub Date: 2025-04-17 | DOI: 10.1067/j.cpradiol.2025.04.006
Alexander Antigua Made BA , Mahan Mathur MD
The American Board of Radiology's new Diagnostic Radiology Oral Examination emphasizes clinical decision-making, communication, and critical thinking. Traditional "hot seat" case conferences remain a valuable way to prepare residents for these requirements. Incorporating game-based formats into these case conferences, such as “Who Wants to Be a Millionaire?”, “Jeopardy!”, “Family Feud,” and “Kahoot!”, can make learning more dynamic and interactive. This review provides practical strategies for integrating these methods into radiology case conferences to enhance resident training and engagement.
Level up your radiology case conferences: Preparing residents for success in oral board examinations using gamification. Current Problems in Diagnostic Radiology, 54(5), pp. 579-584.
Pub Date: 2025-04-17 | DOI: 10.1067/j.cpradiol.2025.04.015
Jada Hislop MD , Jasmine Locklin MD , Chris Ho MD , Elizabeth A. Krupinski PhD , Charnaye R. Bosley , Timothy Arleo MD , Nadja Kadom MD
Purpose
To assess patient-centered revisions to our institution’s screening mammography recall letters for BI-RADS 0 and BI-RADS 0 with dense breasts, employing existing validated readability and usability rating instruments.
Methods/approach
Cross-sectional analysis of two different mammography recall letters used by our institution, revised to be patient-centered: the mammography recall letter (BI-RADS 0) and the recall letter for patients with dense breasts (BI-RADS 0-DB). During the editorial stage, we used ChatGPT (GPT-3.5) and the Flesch-Kincaid grade level (FKGL). After updates to the layout and the addition of visuals, the letters were rated by professional subject-matter experts (SMEs) for understandability and actionability using the Patient Education Materials Assessment Tool (PEMAT). The letters were then evaluated by patients for comprehensibility, utility, and design using the Consumer Information Rating Form (CIRF). Descriptive statistics were calculated for each assessment.
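The Flesch-Kincaid grade level used in the editorial stage is a simple arithmetic formula over words, sentences, and syllables. A minimal sketch in Python; the vowel-group syllable counter is a crude heuristic of our own, not the validated counter the authors would have used:

```python
import re

def count_syllables(word: str) -> int:
    # Crude heuristic: count groups of consecutive vowels,
    # dropping a trailing silent "e"; every word gets at least 1.
    groups = re.findall(r"[aeiouy]+", word.lower())
    n = len(groups)
    if word.lower().endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def fkgl(text: str) -> float:
    # Flesch-Kincaid grade level:
    # 0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words))
            - 15.59)
```

Short, monosyllabic sentences drive the score down (it can even go negative), which is why simplifying sentence length and word choice lowered the letters' grade levels.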
Results
Baseline BI-RADS 0 and BI-RADS 0-DB letter FKGL scores were 11.9 and 10.7, respectively; after iterative revision, the FKGL scores were 6.7 and 5.8, respectively. PEMAT ratings for the BI-RADS 0 recall letter improved after revision from 41% to 90% for understandability and from 50% to 88% for actionability. For the revised BI-RADS 0-DB letter, understandability improved from 46% to 85% and actionability from 44% to 73%. CIRF ratings indicated significant value for the added images in the BI-RADS 0-DB letter.
Conclusion
Use of validated and established assessment tools confirmed that our new breast imaging letters were improved in terms of readability, understandability/comprehensibility, actionability, utility, and design. The process now serves as a pipeline for future revisions to documents that our department shares with patients.
Quality improvement project: Patient-centered breast imaging letters. Current Problems in Diagnostic Radiology, 54(5), pp. 608-615.
Pub Date: 2025-04-17 | DOI: 10.1067/j.cpradiol.2025.04.005
Olena O. Weaver , Alejandro Contreras , Ethan O. Cohen , Mary S. Guirguis , Megha M. Kapoor , Marion E. Scoggins , Rosa F. Hwang , Rosalind P. Candelaria , Wei T. Yang , Jennifer B. Dennison , Jia Sun , Gary J. Whitman
Objectives
To evaluate combined digital breast tomosynthesis and contrast-enhanced mammography (DBT/CEM) for predicting pectoralis muscle invasion (PMI).
Methods
This retrospective multi-reader cohort study included research patients who underwent combined DBT/CEM for breast cancer staging and had prepectoral masses. Images were independently reviewed by six fellowship-trained breast radiologists. Diagnostic performance, reader confidence, and inter-reader agreement were calculated for each image type/modality.
Results
Among 10 patients with prepectoral masses on DBT/CEM, muscle invasion was present in 3 and absent in 7. The overall diagnostic accuracy of DBT/CEM for PMI was 0.6 (range, 0.4-0.9); for predefined radiologic signs, it was 0.5-0.7 for low-energy (LE) CEM, 0.4-0.7 for DBT, and 0.4-0.8 for recombined (RC) CEM. Muscle deformity on MLO views had the highest accuracy (0.7-0.8). On a scale of 1-3, mean radiologist confidence for combined DBT/CEM was 1.9 (range, 1.5-2.3; SD = 0.65). Median confidence ranged from 1.9 for RC to 2.2 for DBT. Per-case reader agreement was poor for DBT/CEM (K = -0.01); poor to slight for RC (K = -0.13 to 0.40, median 0.28); and slight to fair for DBT (K = 0.04 to 0.43, median 0.27) and LE (K = 0.02 to 0.42, median 0.19). In two patients with subpectoral breast implants, CEM was accurate in detecting PMI, while MRI had one false-positive result.
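The reader-agreement values above are kappa statistics, which correct observed agreement for the agreement expected by chance. A minimal sketch of Cohen's kappa for two raters; the example labels below are hypothetical, not the study's data:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: (p_o - p_e) / (1 - p_e) for two raters' labels."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed proportion of agreement.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of each rater's marginal label frequencies.
    ca, cb = Counter(rater_a), Counter(rater_b)
    p_e = sum((ca[k] / n) * (cb[k] / n) for k in set(ca) | set(cb))
    return (p_o - p_e) / (1 - p_e)
```

A kappa near 0 (like the -0.01 reported for combined DBT/CEM) means the readers agreed no more often than chance would predict, even if their raw percent agreement looks respectable.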
Conclusion
Combined DBT/CEM accuracy and inter-reader agreement are suboptimal for PMI evaluation, except in patients with breast implants. RC images marginally improve accuracy compared with LE images but have the lowest radiologist confidence; DBT has the lowest accuracy but the highest confidence. Muscle deformity on the MLO view was the most accurate sign.
Critical Relevance Statement
Combined DBT/CEM demonstrated suboptimal diagnostic accuracy, reader confidence, and inter-reader agreement for detecting pectoralis muscle invasion (PMI) in prepectoral breast cancer (BC) except for patients with subpectoral breast implants, where recombined images on implant-displaced CEM views performed better than MRI.
Assessment of Pectoralis muscle invasion using combined DBT and contrast-enhanced mammography: Retrospective multi-reader study. Current Problems in Diagnostic Radiology, 55(1), pp. 95-104.
Pub Date: 2025-04-17 | DOI: 10.1067/j.cpradiol.2025.04.007
Sakina Divan , Hebatullah M. Elsingergy , Arif Musa , Mohamed M. Elsingergy , Brigitte Berryhill , Gulcin Altinok
Diagnostic Radiology has emerged as an increasingly competitive specialty, posing a significant challenge for aspirants, particularly for Doctors of Osteopathic Medicine (DOs) and International Medical Graduates (IMGs). This could be attributed to the field’s dynamic nature, flexibility of career paths, and high job demand. This article delves into a decade's worth of matching trends in diagnostic radiology, underscoring the unique obstacles faced by DOs and IMGs, including possible implicit biases, logistical hurdles, and the implications of the USMLE Step 1's transition to pass/fail scoring. It offers practical solutions to level the playing field, such as expanding clinical and research opportunities for applicants, encouraging residency programs to address implicit biases, increasing curriculum adaptability in osteopathic and foreign medical schools, and exploring accreditation reforms. Together, these recommendations aim to create a more equitable selection process and mitigate the systemic barriers DOs and IMGs face in securing highly sought-after radiology residency spots.
A breakdown of how diagnostic radiology residency became increasingly competitive for US doctors of osteopathic medicine (DOs) and international medical graduates (IMGs). Current Problems in Diagnostic Radiology, 55(1), pp. 1-4.
Pub Date: 2025-04-17 | DOI: 10.1067/j.cpradiol.2025.04.016
Ibrahim A Rehman BA, Richard B Gunderman MD PhD
A key component of the introduction of any medical innovation is education. Simply put, health professionals not involved in such an innovation’s development need to learn about it, and such education needs to be tailored to the needs of different learning communities, including those who will someday incorporate it into the care of their patients and those who will receive queries about it from patients and colleagues. Among such key groups are medical students and residents, and one such promising innovation is theranostics, a burgeoning field whose name is a portmanteau of therapeutics and diagnostics that combines targeted therapeutics with molecular imaging to deliver individualized care. The field is sufficiently new that it is not included in the curricula of many medical schools and residency programs, yet physicians in training need a basic understanding of its current and projected future role in healthcare. This article serves as such an introduction.
Theranostics: A primer for medical students and residents. Current Problems in Diagnostic Radiology, 55(2), pp. 226-228.
Pub Date: 2025-04-11 | DOI: 10.1067/j.cpradiol.2024.10.035
Evie Nguyen , Christopher A. Dodoo MS , Imon Banerjee PhD , Fatima Al-Khafaji MBChB , Jacob A. Varner , Iridian Jaramillo MS , Meghana Nadella MS , Tyler M. Kuo , Zoe Deahl , Dyan G. DeYoung , Nelly Tan MD
Objective
We examined the feasibility of collecting timely patient feedback after outpatient magnetic resonance imaging (MRI) and the effect of radiology staff responses or actions on patient experience scores.
Methods
This study included 6043 patients who completed a feedback survey via email after undergoing outpatient MRI at a tertiary care medical center between April 2021 and September 2022. The survey consisted of the question “How was your radiology visit?” with a 5-point emoji-Likert scale, an open-text feedback box, and an option to request a response. The primary outcome measure analyzed was the “top box” score (ie, the percentage of 5/5 scores) reflecting overall patient satisfaction. For comparison, Press Ganey quarterly top box scores from a separate group of patients who underwent outpatient MRI concurrent with the study period were also analyzed. Patient-reported feedback was categorized by using natural language processing and analyzed along with radiology staff responses and actions.
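The "top box" score defined above is simple arithmetic: the share of responses at the maximum of the 5-point scale. A minimal sketch; the sample ratings are hypothetical:

```python
def top_box_score(ratings):
    """Percentage of responses scoring the maximum 5/5 on a 5-point scale."""
    return 100 * sum(r == 5 for r in ratings) / len(ratings)

# Hypothetical example: two of four respondents gave 5/5.
example = top_box_score([5, 5, 4, 3])  # 50.0
```

Because only perfect scores count, the metric is stricter than a mean rating and is the convention used by Press Ganey surveys, which makes the two data sources in this study comparable.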
Results
The top box score for “How was your radiology visit?” increased from 81.1% during the first month of the study to 86.1% during the last month. Similarly, the comparative Press Ganey top box scores for questions related to “radiology staff concern for comfort” and “courtesy of radiology technologist” increased from the first quarter to the last quarter of the study. Patients reported service excellence in 59.2% of surveys (n=3576), long wait time in 6.3% (n=383), and poor communication in 6.1% (n=369). Some praise from patients was shared with staff members who interacted with the patients. Of all survey responses, 5.5% required radiology staff responses or actions, such as sharing feedback with supervisors, providing direct feedback to staff, and making telephone calls to patients. From the first half to the second half of the study, the median (IQR) wait time decreased from 46 (32–66) minutes to 45 (31–64) minutes (P=.02), and the percentage of patients who reported long wait time decreased from 7.4% to 5.4% (P=.002).
Conclusion
Our study highlights the feasibility of obtaining timely patient feedback after outpatient MRI and using it to improve patient experience. These results may contribute to the development of more patient-centered care in the field of radiology.
Effects of patient survey feedback on improving patient experience with outpatient magnetic resonance imaging. Current Problems in Diagnostic Radiology, 54(3), pp. 369-376.
Pub Date: 2025-04-10 | DOI: 10.1067/j.cpradiol.2025.04.001
Nikolaos-Achilleas Arkoudis MD, PhD , Georgios Velonakis MD, PhD
Bilateral internal carotid dissection: advocating for the use of the “googly eyes sign”. Current Problems in Diagnostic Radiology, 54(4), pp. 526-527.
Despite a decade of significant growth in the economic conditions of South Asian countries, people continue to suffer from the pervading problem of malnutrition. The high prevalence of child undernutrition despite unprecedented economic growth in these nations has a multifactorial etiology, including fetal malnutrition, the status of women, inadequate feeding practices in infants and young children, poor household sanitation, and untargeted health schemes. The diagnosis and management of malnutrition and its various complications require a multidisciplinary approach, and radiologists have a potentially important, albeit currently underutilized, role in early detection, in identifying clinical mimics such as endocrine and genetic disorders, and in detecting key complications. In this review, we appraise the radiological aspects of protein-energy malnutrition (PEM) and micronutrient deficiency and their complications. We also provide a comprehensive, structured scheme for the evaluation of a suspected malnourished child.
Radiological insights into pediatric undernutrition: Early detection, complications, and a structured evaluation approach. Ishan Kumar MBBS, MD, DNB , Ashish Verma MBBS, DNB, PhD , Priyanka Aggarwal MBBS, MD , Nidhi Yadav MBBS, MD , Karan Kukreja MBBS, MD , Pramod Kumar Singh MBBS, MD. Current Problems in Diagnostic Radiology, 54(5), pp. 616-626. Pub Date: 2025-03-18 | DOI: 10.1067/j.cpradiol.2025.03.002.
Pub Date: 2025-03-08 | DOI: 10.1067/j.cpradiol.2025.03.001
Negar Firoozeh MD , Sung Yoon Park MD , Yaw Nyame MD , Arash Mahdavi MD , Seyed Ali Nabipoorashrafi MD , Achille Mileto MD , Bahar Mansoori MD , Antonio C Westphalen MD, PhD
Objective
To compare Prostate Imaging Reporting and Data System (PI-RADS) scores derived from a standard multiparametric prostate MRI (mpMRI) protocol with those from a protocol consisting only of T2-weighted and dynamic contrast-enhanced images (T2+DCE MRI).
Methods
In this retrospective, single-center, cross-sectional study approved by the IRB and compliant with HIPAA, 492 MRI exams performed in 2022 were analyzed. PI-RADS scores from mpMRIs were extracted from medical records, and new scores were generated for T2+DCE MRI following PI-RADS guidelines. Score differences were evaluated using Wilcoxon signed-rank and McNemar's tests, stratified by lesion location (peripheral zone, PZ, and transition zone, TZ). Diagnostic accuracies of the two methods were compared using ROC curves, and logistic regression was employed to identify predictors of score changes.
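Of the paired tests named above, McNemar's test reduces to simple arithmetic on the discordant pairs, i.e. cases classified differently by the two protocols. A minimal sketch; the counts in the example are hypothetical, and the p-value lookup against the chi-squared distribution is omitted:

```python
def discordant_counts(protocol_a, protocol_b):
    """Count pairs positive on A only (b) and positive on B only (c)."""
    b = sum(x == 1 and y == 0 for x, y in zip(protocol_a, protocol_b))
    c = sum(x == 0 and y == 1 for x, y in zip(protocol_a, protocol_b))
    return b, c

def mcnemar_statistic(b: int, c: int) -> float:
    """Chi-squared statistic (1 df), without continuity correction."""
    return (b - c) ** 2 / (b + c)
```

Concordant pairs drop out entirely; only disagreements between the two protocols carry information, which is why the test suits paired designs like scoring the same lesions with and without DWI.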
Results
Notable differences in PI-RADS scores were observed in the PZ (P = 0.03) and TZ (P < 0.001). Overall, 4.8% of PZ and 4.0% of TZ PI-RADS 3-5 lesions were misclassified as PI-RADS 1-2 on T2+DCE MRI (PZ vs TZ, P = 0.64). However, ROC curve analyses revealed no significant difference in diagnostic accuracy between mpMRI (Az = 0.77) and T2+DCE MRI (Az = 0.75, P = 0.07). PSA density was identified as a predictor of score changes from PI-RADS 3-5 to 1-2, although the effect size was modest.
Conclusions
Although T2+DCE MRI yields different PI-RADS scores compared to mpMRI, the clinical impact on diagnostic accuracy and decision-making is overall small. This supports the continued use of T2+DCE MRI, particularly when diffusion-weighted imaging is compromised.
{"title":"Diagnostic impact of DWI absence on prostate lesion assessment using PI-RADS 2.1","authors":"Negar Firoozeh MD , Sung Yoon Park MD , Yaw Nyame MD , Arash Mahdavi MD , Seyed Ali Nabipoorashrafi MD , Achille Mileto MD , Bahar Mansoori MD , Antonio C Westphalen MD, PhD","doi":"10.1067/j.cpradiol.2025.03.001","DOIUrl":"10.1067/j.cpradiol.2025.03.001","url":null,"abstract":"<div><h3>Objective</h3><div>To compare Prostate Imaging Reporting and Data System (PI-RADS) scores derived from a standard multiparametric prostate MRI (mpMRI) protocol with those from a protocol consisting only of T2-weighted and dynamic contrast-enhanced images (T2+DCE MRI).</div></div><div><h3>Methods</h3><div>In this retrospective, single-center, cross-sectional study approved by the IRB and compliant with HIPAA, 492 MRI exams performed in 2022 were analyzed. PI-RADS scores from mpMRIs were extracted from medical records, and new scores were generated for T2+DCE MRI following PI-RADS guidelines. Score differences were evaluated using Wilcoxon signed-rank and McNemar's tests, stratified by lesion location (peripheral zone, PZ, and transition zone, TZ). Diagnostic accuracies of the two methods were compared using ROC curves, and logistic regression was employed to identify predictors of score changes.</div></div><div><h3>Results</h3><div>Notable differences in PI-RADS scores were observed were observed in the PZ (<em>P</em> = 0.03) and TZ (<em>P</em> < 0.001). 4.8 % of PZ and 4.0 % of TZ PI-RADS 3-5 lesions were misclassified as PI-RADS 1-2 on T2W+DCE MRI (PZ vs TZ, <em>P</em> = 0.64). However, ROC curve analyses revealed no significant difference in diagnostic accuracy between mpMRI (Az = 0.77) and T2+DCE MRI (Az = 0.75, <em>P</em> = 0.07). 
PSA density was identified as a predictor of score changes from PI-RADS 3-5 to 1-2, although the effect size was modest.</div></div><div><h3>Conclusions</h3><div>Although T2+DCE MRI yields different PI-RADS scores compared to mpMRI, the clinical impact on diagnostic accuracy and decision-making is overall small. This supports the continued use of T2+DCE MRI, particularly when diffusion-weighted imaging is compromised.</div></div>","PeriodicalId":51617,"journal":{"name":"Current Problems in Diagnostic Radiology","volume":"54 5","pages":"Pages 596-602"},"PeriodicalIF":1.5,"publicationDate":"2025-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143607567","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
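The paired-protocol comparison described in the abstract above (a Wilcoxon signed-rank test on the ordinal PI-RADS scores, and McNemar's test on scores dichotomized into clinically significant PI-RADS 3-5 vs. 1-2) can be sketched as follows. Only the choice of tests comes from the abstract; the paired scores below are hypothetical, invented for illustration.

```python
import numpy as np
from scipy.stats import wilcoxon
from statsmodels.stats.contingency_tables import mcnemar

# Hypothetical paired PI-RADS scores for the same lesions, read once with
# the full mpMRI protocol and once with the abbreviated T2+DCE protocol.
mpmri = np.array([3, 4, 4, 5, 2, 3, 4, 5, 3, 2, 4, 3])
t2dce = np.array([2, 4, 3, 5, 2, 2, 4, 4, 3, 3, 3, 3])

# Wilcoxon signed-rank test on the paired ordinal scores.
stat, p_wilcoxon = wilcoxon(mpmri, t2dce)

# McNemar's test on dichotomized scores (PI-RADS 3-5 vs. 1-2),
# arranged as a 2x2 paired contingency table.
pos_mp = mpmri >= 3
pos_t2 = t2dce >= 3
table = [[np.sum(pos_mp & pos_t2), np.sum(pos_mp & ~pos_t2)],
         [np.sum(~pos_mp & pos_t2), np.sum(~pos_mp & ~pos_t2)]]
result = mcnemar(table, exact=True)

print(f"Wilcoxon p = {p_wilcoxon:.3f}, McNemar p = {result.pvalue:.3f}")
```

McNemar's test is the natural choice here because it looks only at the discordant cells of the paired table, i.e. lesions whose dichotomized category changes between protocols.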
Pub Date : 2025-02-18DOI: 10.1067/j.cpradiol.2025.02.002
Zier Zhou , Arsalan Rizwan , Nick Rogoza , Andrew D Chung , Benjamin YM Kwan
Purpose
Recent competency-based medical education (CBME) implementation within Canadian radiology programs has required faculty to conduct more assessments. The growth of narrative feedback in CBME, coinciding with the rise of large language models (LLMs), raises questions about whether these models can generate informative comments comparable to those of human experts, and about the challenges this would entail. This study compares human-written feedback with GPT-3.5-generated feedback for radiology residents and evaluates how well raters can differentiate between the two sources.
Methods
Assessments were completed by 28 faculty members for 10 residents within a Canadian Diagnostic Radiology program (2019–2023). Comments were extracted from Elentra, de-identified, and parsed into sentences, of which 110 were randomly selected for analysis. Eleven of these comments were entered into GPT-3.5 to generate 110 synthetic comments, which were mixed with the actual comments. Two faculty raters and GPT-3.5 read each comment and predicted whether it was human-written or GPT-generated.
Results
Actual comments from humans were often longer and more specific than synthetic comments, especially when describing clinical procedures and patient interactions. Source differentiation was more difficult when both feedback types were similarly vague. Agreement between the responses of GPT-3.5 and the human raters was low (κ = -0.237). Human raters were also more accurate (80.5 %) than GPT-3.5 (50 %) at distinguishing actual from synthetic comments.
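The two headline statistics in these results, chance-corrected agreement between the two raters and each rater's raw accuracy against the true source labels, can be illustrated with a short sketch. The labels below are made up for illustration and are not the study's data; only the metrics (Cohen's kappa and percent accuracy) come from the abstract.

```python
from sklearn.metrics import accuracy_score, cohen_kappa_score

# Hypothetical source labels for 10 comments (1 = human-written,
# 0 = GPT-generated) and the corresponding source predictions made
# by a human rater and by GPT-3.5 itself.
truth       = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
human_rater = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
gpt_rater   = [0, 1, 0, 1, 1, 1, 0, 1, 0, 1]

# Cohen's kappa: agreement between the two raters corrected for chance.
# A value near or below zero indicates no real agreement.
kappa = cohen_kappa_score(human_rater, gpt_rater)

# Raw accuracy of each rater against the true source labels.
acc_human = accuracy_score(truth, human_rater)
acc_gpt = accuracy_score(truth, gpt_rater)

print(f"kappa = {kappa:.3f}, "
      f"human accuracy = {acc_human:.1%}, GPT accuracy = {acc_gpt:.1%}")
```

Note that kappa is computed rater-versus-rater (it says nothing about who is right), while accuracy is computed rater-versus-truth, which is why the two can diverge so sharply.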
Conclusion
Currently, GPT-3.5 cannot match human experts in delivering specific, nuanced feedback for radiology residents. Compared to humans, GPT-3.5 also performs worse in distinguishing between actual and synthetic comments. These insights could guide the development of more sophisticated algorithms to produce higher-quality feedback, supporting faculty development.
{"title":"Differentiating between GPT-generated and human-written feedback for radiology residents","authors":"Zier Zhou , Arsalan Rizwan , Nick Rogoza , Andrew D Chung , Benjamin YM Kwan","doi":"10.1067/j.cpradiol.2025.02.002","DOIUrl":"10.1067/j.cpradiol.2025.02.002","url":null,"abstract":"<div><h3>Purpose</h3><div>Recent competency-based medical education (CBME) implementation within Canadian radiology programs has required faculty to conduct more assessments. The growth of narrative feedback in CBME, coinciding with the rise of large language models (LLMs), raises questions about whether these models can generate informative comments comparable to those of human experts, and about the challenges this would entail. This study compares human-written feedback with GPT-3.5-generated feedback for radiology residents and evaluates how well raters can differentiate between the two sources.</div></div><div><h3>Methods</h3><div>Assessments were completed by 28 faculty members for 10 residents within a Canadian Diagnostic Radiology program (2019–2023). Comments were extracted from Elentra, de-identified, and parsed into sentences, of which 110 were randomly selected for analysis. Eleven of these comments were entered into GPT-3.5 to generate 110 synthetic comments, which were mixed with the actual comments. Two faculty raters and GPT-3.5 read each comment and predicted whether it was human-written or GPT-generated.</div></div><div><h3>Results</h3><div>Actual comments from humans were often longer and more specific than synthetic comments, especially when describing clinical procedures and patient interactions. Source differentiation was more difficult when both feedback types were similarly vague. Agreement between the responses of GPT-3.5 and the human raters was low (<em>κ</em> = -0.237). 
Human raters were also more accurate (80.5 %) at identifying actual and synthetic comments than GPT-3.5 (50 %).</div></div><div><h3>Conclusion</h3><div>Currently, GPT-3.5 cannot match human experts in delivering specific, nuanced feedback for radiology residents. Compared to humans, GPT-3.5 also performs worse in distinguishing between actual and synthetic comments. These insights could guide the development of more sophisticated algorithms to produce higher-quality feedback, supporting faculty development.</div></div>","PeriodicalId":51617,"journal":{"name":"Current Problems in Diagnostic Radiology","volume":"54 5","pages":"Pages 574-578"},"PeriodicalIF":1.5,"publicationDate":"2025-02-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143473241","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}