Investing in Artificial Intelligence and Digital Health—What Radiology Innovators Need to Know
Pub Date: 2024-10-01 | DOI: 10.1016/j.jacr.2024.06.019
Expected to grow at a 5.5% compound annual growth rate and reach a market size of $34.6 billion by 2028, the diagnostic radiology market is an innovation powerhouse, driven in significant part by artificial intelligence and digital products. Many radiologists, researchers, technologists, and leaders possess the skills to develop cutting-edge innovations that improve patient care. Funding, however, is invariably needed to bring these innovations to fruition. Here we describe, from the vantage point of a practicing venture partner, the key considerations, criteria, and frameworks used when deciding what, when, and whom to fund. We also describe the current funding climate for these innovations.
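As a quick sanity check on those headline figures, the implied base-year market size can be backed out from the CAGR. A minimal sketch in Python, assuming a 2023 base year (the abstract does not state one):

```python
# Back out the implied base-year market size from the quoted 5.5% CAGR and the
# $34.6B 2028 target. The 2023 base year is an assumption, not from the abstract.
cagr = 0.055
target = 34.6          # $B, projected 2028 market size
years = 2028 - 2023    # assumed compounding horizon

base = target / (1 + cagr) ** years
print(f"Implied 2023 market size: ${base:.1f}B")   # ~$26.5B

for year in range(2023, 2029):
    print(year, f"${base * (1 + cagr) ** (year - 2023):.1f}B")
```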
{"title":"Investing in Artificial Intelligence and Digital Health—What Radiology Innovators Need to Know","authors":"","doi":"10.1016/j.jacr.2024.06.019","DOIUrl":"10.1016/j.jacr.2024.06.019","url":null,"abstract":"<div><div>Expected to grow at a 5.5% compound annual growth rate and reach a market of $34.6 billion by 2028, the diagnostic radiology market is an innovation powerhouse, in significant part due to artificial intelligence and digital products. Many radiologists, researchers, technologists, and leaders possess the skills to develop cutting-edge innovations to improve patient care. However, invariably funding is needed to bring these innovations to fruition. Here we describe, from the vantage point of a practicing venture partner, the key considerations, criteria, and frameworks used when making decisions of what, when, and who to invest funding in. We also describe the current funding climate for these innovations.</div></div>","PeriodicalId":49044,"journal":{"name":"Journal of the American College of Radiology","volume":null,"pages":null},"PeriodicalIF":4.0,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141539036","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Patient Perceptions of Standardized Risk Language Used in ACR Prostate MRI PI-RADS Scores
Pub Date: 2024-10-01 | DOI: 10.1016/j.jacr.2024.04.030
Introduction
Prostate MRI reports use standardized language to describe the risk of clinically significant prostate cancer (csPCa), from "equivocal" (Prostate Imaging Reporting and Data System [PI-RADS] 3) and "likely" (PI-RADS 4) to "highly likely" (PI-RADS 5). According to American Urological Association guidelines, these terms correspond to risks of 11%, 37%, and 70%, respectively. We assessed how men perceive the risk associated with standardized PI-RADS language.
Methodology
We conducted a crowdsourced survey of 1,204 men matching a US prostate cancer demographic. We queried participants’ risk perception associated with standardized PI-RADS language across increasing contexts: words only, PI-RADS sentence, full report, and full report with numeric estimate. Median perceived risk (interquartile range) and absolute under/overestimation compared with American Urological Association standards were reported. Multivariable linear mixed-effects analysis identified factors associated with accuracy of risk perception.
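For readers unfamiliar with the design, a linear mixed-effects model with a random intercept per respondent (each man rated multiple contexts) is the natural specification. A hedged sketch with statsmodels, in which the file and column names (survey.csv, perceived_risk, guideline_risk, context, pirads, numeracy, pid) are all hypothetical, since the abstract does not publish the authors' model:

```python
# Hedged sketch of a multivariable linear mixed-effects analysis of risk-perception
# error; not the authors' actual code. All file and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey.csv")
# Signed error: positive values mean the respondent overestimated the risk.
df["error"] = df["perceived_risk"] - df["guideline_risk"]

# A random intercept per participant (pid) accounts for repeated ratings
# across the four reporting contexts.
model = smf.mixedlm("error ~ C(context) + C(pirads) + numeracy",
                    data=df, groups=df["pid"])
print(model.fit().summary())
```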
Results
Median perceived risks of csPCa (interquartile range) for the word-only context were "equivocal" 50% (50%-74%), "likely" 75% (68%-85%), and "highly likely" 87% (78%-92%), corresponding to +39%, +38%, and +17% overestimation, respectively. Median perceived risks for the PI-RADS-sentence context were 50% (50%-50%), 75% (68%-81%), and 90% (80%-94%) for PI-RADS 3, 4, and 5, corresponding to +39%, +38%, and +20% overestimation, respectively. Median perceived risks for the full-report context were 50% (35%-70%), 72% (50%-80%), and 84% (54%-91%) for PI-RADS 3, 4, and 5, corresponding to +39%, +35%, and +14% overestimation, respectively. For the full-report-with-numeric-estimate context describing a PI-RADS 4 lesion, median perceived risk was 70% (50%-80%), corresponding to +33% overestimation. Including numeric estimates increased correct perception of risk from 3% to 11% (P < .001), driven by men with higher numeracy (odds ratio 1.24, P = .04).
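The overestimation figures above are straight differences between the median perceived risk and the AUA guideline risks (11%, 37%, and 70%); for example, for the word-only context:

```python
# Word-only context: overestimation = median perceived risk - AUA guideline risk,
# in percentage points (figures taken from the Results above).
guideline = {"PI-RADS 3": 11, "PI-RADS 4": 37, "PI-RADS 5": 70}
perceived = {"PI-RADS 3": 50, "PI-RADS 4": 75, "PI-RADS 5": 87}

for score, g in guideline.items():
    print(f"{score}: +{perceived[score] - g}%")   # +39%, +38%, +17%
```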
Conclusion
Men overestimate risk of csPCa associated with standardized PI-RADS language regardless of context, especially for PI-RADS 3 and 4 lesions. Changes to PI-RADS language or data-sharing policies for imaging reports should be considered.
{"title":"Patient Perceptions of Standardized Risk Language Used in ACR Prostate MRI PI-RADS Scores","authors":"","doi":"10.1016/j.jacr.2024.04.030","DOIUrl":"10.1016/j.jacr.2024.04.030","url":null,"abstract":"<div><h3>Introduction</h3><div>Prostate MRI reports use standardized language to describe risk of clinically significant prostate cancer (csPCa) from “equivocal” (Prostate Imaging Reporting and Data System [PI-RADS] 3), “likely” (PI-RADS 4), to “highly likely” (PI-RADS 5). These terms correspond to risks of 11%, 37%, and 70% according to American Urological Association guidelines, respectively. We assessed how men perceive risk associated with standardized PI-RADS language.</div></div><div><h3>Methodology</h3><div>We conducted a crowdsourced survey of 1,204 men matching a US prostate cancer demographic. We queried participants’ risk perception associated with standardized PI-RADS language across increasing contexts: words only, PI-RADS sentence, full report, and full report with numeric estimate. Median perceived risk (interquartile range) and absolute under/overestimation compared with American Urological Association standards were reported. Multivariable linear mixed-effects analysis identified factors associated with accuracy of risk perception.</div></div><div><h3>Results</h3><div>Median perceived risks of csPCa (interquartile range) for the word-only context were “equivocal” 50% (50%-74%), “likely” 75% (68%-85%), and “highly likely” 87% (78%-92%), corresponding to +39%, +38%, and +17% overestimation, respectively. Median perceived risks for the PI-RADS-sentence context were 50% (50%-50%), 75% (68%-81%), and 90% (80%-94%) for PI-RADS 3, 4, and 5, corresponding to +39%, +38%, and +20% overestimation, respectively. Median perceived risks for the full-report context were 50% (35%-70%), 72% (50%-80%), and 84% (54%-91%) for PI-RADS 3, 4, and 5, corresponding to +39%, +35%, and +14% overestimation, respectively. For the full-report-with-numeric-estimate context describing a PI-RADS 4 lesion, median perceived risk was 70% (50%-%80), corresponding to +33% overestimation. Including numeric estimates increased correct perception of risk from 3% to 11% (<em>P</em> < .001), driven by men with higher numeracy (odds ratio 1.24, <em>P</em> = .04).</div></div><div><h3>Conclusion</h3><div>Men overestimate risk of csPCa associated with standardized PI-RADS language regardless of context, especially for PI-RADS 3 and 4 lesions. Changes to PI-RADS language or data-sharing policies for imaging reports should be considered.</div></div>","PeriodicalId":49044,"journal":{"name":"Journal of the American College of Radiology","volume":null,"pages":null},"PeriodicalIF":4.0,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141332684","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Building a Career in Radiology Innovation: A Primer for Trainees and First-Time Innovators to Act on Opportunities
Pub Date: 2024-10-01 | DOI: 10.1016/j.jacr.2024.06.010
{"title":"Building a Career in Radiology Innovation: A Primer for Trainees and First-Time Innovators to Act on Opportunities","authors":"","doi":"10.1016/j.jacr.2024.06.010","DOIUrl":"10.1016/j.jacr.2024.06.010","url":null,"abstract":"","PeriodicalId":49044,"journal":{"name":"Journal of the American College of Radiology","volume":null,"pages":null},"PeriodicalIF":4.0,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141473281","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Evolving With Artificial Intelligence: Integrating Artificial Intelligence and Imaging Informatics in a General Residency Curriculum With an Advanced Track
Pub Date: 2024-10-01 | DOI: 10.1016/j.jacr.2024.07.007
{"title":"Evolving With Artificial Intelligence: Integrating Artificial Intelligence and Imaging Informatics in a General Residency Curriculum With an Advanced Track","authors":"","doi":"10.1016/j.jacr.2024.07.007","DOIUrl":"10.1016/j.jacr.2024.07.007","url":null,"abstract":"","PeriodicalId":49044,"journal":{"name":"Journal of the American College of Radiology","volume":null,"pages":null},"PeriodicalIF":4.0,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141876899","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Medical School House Rock: Randomized Trial
Pub Date: 2024-10-01 | DOI: 10.1016/j.jacr.2024.04.001
{"title":"Medical School House Rock: Randomized Trial","authors":"","doi":"10.1016/j.jacr.2024.04.001","DOIUrl":"10.1016/j.jacr.2024.04.001","url":null,"abstract":"","PeriodicalId":49044,"journal":{"name":"Journal of the American College of Radiology","volume":null,"pages":null},"PeriodicalIF":4.0,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140783021","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Analysis of National Resident Matching Program for Radiology Fellowships: Factors Affecting Program Fill Rates
Pub Date: 2024-10-01 | DOI: 10.1016/j.jacr.2024.04.011
Purpose
The National Resident Matching Program (NRMP) is used by an increasing number of diagnostic radiology (DR) residents applying to subspecialty fellowships. Data characterizing match outcomes on the basis of program characteristics are limited. The aim of this study was to determine whether fellowship or residency size, location, or perceived reputation was associated with a program filling its quota.
Methods
Using public NRMP data from 2004 to 2022, we characterized DR residency and breast imaging (BI), musculoskeletal imaging (MSK), interventional radiology (IR), and neuroradiology (NR) fellowship programs by geography, DR and fellowship quota, applicants per position (A/P), and reputation, the last determined by status as an Aunt Minnie best DR program semifinalist, a Doximity 2021-2022 top 25 program, or a U.S. News & World Report top 20 hospital. A fellowship's reputation was taken from the DR program at the same institution. A program was considered filled if it met its quota.
Results
The 2022 A/P ratios were 1.02 for IR, 0.83 for BI, 0.75 for MSK, and 0.88 for NR. IR was excluded from additional analysis because its A/P was >1. The combined BI, MSK, and NR fellowships filled 78% of positions (529 of 679) and 56% of programs (132 of 234). Factors associated with higher program fill rates included Doximity top 25 program, Aunt Minnie semifinalist, and U.S. News & World Report top 20 hospital affiliation (P < .001 for all), as well as a DR residency quota greater than 9 and a fellowship quota of three or more (P < .01). The Ohio Valley (Ohio, western Pennsylvania, West Virginia, and Kentucky) had the lowest fill rate, at 39% of programs (P = .06).
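The headline fill rates follow directly from the counts reported above; for instance:

```python
# Position- and program-level fill rates for the combined BI, MSK, and NR
# fellowships, from the counts in the Results above.
positions_filled, positions_total = 529, 679
programs_filled, programs_total = 132, 234

print(f"Positions filled: {positions_filled / positions_total:.0%}")  # 78%
print(f"Programs filled: {programs_filled / programs_total:.0%}")     # 56%
```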
Conclusions
Larger fellowship programs with higher perceived reputations and larger underlying DR residency programs were significantly more likely to fill their NRMP quota.
{"title":"Analysis of National Resident Matching Program for Radiology Fellowships: Factors Affecting Program Fill Rates","authors":"","doi":"10.1016/j.jacr.2024.04.011","DOIUrl":"10.1016/j.jacr.2024.04.011","url":null,"abstract":"<div><h3>Purpose</h3><div>The National Resident Matching Program (NRMP) is used by an increasing number of diagnostic radiology (DR) residents applying to subspecialty fellowships. Data characterizing match outcomes on the basis of program characteristics are limited. The aim of this study was to determine if fellowship or residency size, location, or perceived reputation was related with a program filling its quota.</div></div><div><h3>Methods</h3><div><span><span><span><span>Using public NRMP data from 2004 to 2022, DR residency, breast imaging (BI), </span>musculoskeletal imaging<span> (MSK), interventional radiology (IR), and </span></span>neuroradiology (NR) fellowship programs were characterized by </span>geography, DR and fellowship quota, applicants per position (A/P), and reputation as determined by being an Aunt Minnie best DR program semifinalist, Doximity 2021-2022 top 25 program, or </span><span><em>U.S.</em><em> News & World Report</em></span> top 20 hospital. The DR program’s reputation was substituted for fellowships at the same institution. A program was considered filled if it met its quota.</div></div><div><h3>Results</h3><div>The 2022 A/P ratios were 1.02 for IR, 0.83 for BI, 0.75 for MSK, and 0.88 for NR. IR was excluded from additional analysis because its A/P was >1. The combined BI, MSK, and NR fellowships filled 78% of positions (529 of 679) and 56% of programs (132 of 234). Factors associated with higher program filling included Doximity top 25 program, Aunt Minnie semifinalist, and <em>U.S. News & World Report</em> top 20 hospital affiliation (<em>P</em> < .001 for all); DR residency quota greater than 9, and fellowship quota of three or more (<em>P</em> < .01). The Ohio Valley (Ohio, western Pennsylvania, West Virginia, and Kentucky) filled the lowest, at 39% of programs (<em>P</em> = .06).</div></div><div><h3>Conclusions</h3><div>Larger fellowship programs with higher perceived reputations and larger underlying DR residency programs were significantly more likely to fill their NRMP quota.</div></div>","PeriodicalId":49044,"journal":{"name":"Journal of the American College of Radiology","volume":null,"pages":null},"PeriodicalIF":4.0,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140892995","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Building Bridges: Future-Proofing Established Industries and Building Relationships with the Black Community
Pub Date: 2024-10-01 | DOI: 10.1016/j.jacr.2023.08.039
{"title":"Building Bridges: Future-Proofing Established Industries and Building Relationships with the Black Community","authors":"","doi":"10.1016/j.jacr.2023.08.039","DOIUrl":"10.1016/j.jacr.2023.08.039","url":null,"abstract":"","PeriodicalId":49044,"journal":{"name":"Journal of the American College of Radiology","volume":null,"pages":null},"PeriodicalIF":4.0,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41161777","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Addressing Mental Health in Professional Management
Pub Date: 2024-10-01 | DOI: 10.1016/j.jacr.2023.08.040
{"title":"Addressing Mental Health in Professional Management","authors":"","doi":"10.1016/j.jacr.2023.08.040","DOIUrl":"10.1016/j.jacr.2023.08.040","url":null,"abstract":"","PeriodicalId":49044,"journal":{"name":"Journal of the American College of Radiology","volume":null,"pages":null},"PeriodicalIF":4.0,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41154013","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Establishing a Validation Infrastructure for Imaging-Based Artificial Intelligence Algorithms Before Clinical Implementation
Pub Date: 2024-10-01 | DOI: 10.1016/j.jacr.2024.04.027
With promising artificial intelligence (AI) algorithms receiving FDA clearance, the potential impact of these models on clinical outcomes must be evaluated locally before their integration into routine workflows. Robust validation infrastructures are pivotal for assessing the accuracy and generalizability of these deep learning algorithms to ensure both patient safety and health equity. Protected health information concerns, intellectual property rights, and the diverse requirements of models impede the development of rigorous external validation infrastructures. The authors offer suggestions for addressing the challenges of building efficient, customizable, and cost-effective infrastructures for the external validation of AI models at large medical centers and institutions. They present comprehensive steps for establishing an AI inferencing infrastructure outside clinical systems to examine the local performance of AI algorithms before practice- or systemwide implementation, and they promote an evidence-based approach for adopting AI models that can enhance radiology workflows and improve patient outcomes.
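To make the proposal concrete: a local validation harness scores the candidate model's outputs against locally curated ground truth, overall and per patient subgroup, before any clinical rollout. A minimal sketch, in which cohort.csv, its columns, and the model_predict stub are all hypothetical stand-ins for a site's own data and inference endpoint:

```python
# Hedged sketch of a local external-validation harness for an imaging AI model.
# cohort.csv, its columns, and model_predict() are hypothetical placeholders.
import pandas as pd
from sklearn.metrics import roc_auc_score

def model_predict(study_id: str) -> float:
    """Stand-in for the candidate model's inference call on one study."""
    raise NotImplementedError("wire this to the local, non-clinical inference endpoint")

cohort = pd.read_csv("cohort.csv")  # columns: study_id, label, subgroup
cohort["score"] = cohort["study_id"].map(model_predict)

# Overall discrimination on local data.
print("Overall AUC:", roc_auc_score(cohort["label"], cohort["score"]))

# Health-equity check: performance should hold across demographic subgroups.
for name, grp in cohort.groupby("subgroup"):
    if grp["label"].nunique() == 2:  # AUC is undefined with a single class
        print(name, roc_auc_score(grp["label"], grp["score"]))
```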
{"title":"Establishing a Validation Infrastructure for Imaging-Based Artificial Intelligence Algorithms Before Clinical Implementation","authors":"","doi":"10.1016/j.jacr.2024.04.027","DOIUrl":"10.1016/j.jacr.2024.04.027","url":null,"abstract":"<div><div>With promising artificial intelligence (AI) algorithms receiving FDA clearance, the potential impact of these models on clinical outcomes must be evaluated locally before their integration into routine workflows. Robust validation infrastructures are pivotal to inspecting the accuracy and generalizability of these deep learning algorithms to ensure both patient safety and health equity. Protected health information concerns, intellectual property rights, and diverse requirements of models impede the development of rigorous external validation infrastructures. The authors propose various suggestions for addressing the challenges associated with the development of efficient, customizable, and cost-effective infrastructures for the external validation of AI models at large medical centers and institutions. The authors present comprehensive steps to establish an AI inferencing infrastructure outside clinical systems to examine the local performance of AI algorithms before health practice or systemwide implementation and promote an evidence-based approach for adopting AI models that can enhance radiology workflows and improve patient outcomes.</div></div>","PeriodicalId":49044,"journal":{"name":"Journal of the American College of Radiology","volume":null,"pages":null},"PeriodicalIF":4.0,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141094743","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Assessing Laterality Errors in Radiology: Comparing Generative Artificial Intelligence and Natural Language Processing
Pub Date: 2024-10-01 | DOI: 10.1016/j.jacr.2024.06.014
Purpose
We compared the performance of a generative artificial intelligence (AI) tool (Augmented Transformer Assisted Radiology Intelligence [ATARI], Microsoft Nuance, Microsoft Corporation, Redmond, Washington) and a natural language processing (NLP) tool for identifying laterality errors in radiology reports and images.
Methods
We used an NLP-based tool (mPower, Microsoft Nuance) to identify radiology reports flagged for laterality errors in its Quality Assurance Dashboard. The NLP model detects and highlights laterality mismatches in radiology reports. From an initial pool of 1,124 radiology reports flagged by the NLP tool for laterality errors, we selected and evaluated 898 reports encompassing radiography, CT, MRI, and ultrasound to ensure comprehensive modality coverage. A radiologist reviewed each radiology report to assess whether the flagged laterality errors were present (reporting error; true-positive) or absent (NLP error; false-positive). Next, we applied ATARI to 237 radiology reports and images with consecutive NLP true-positive (118 reports) and false-positive (119 reports) laterality errors. We estimated the accuracy of the NLP and generative AI tools in identifying overall and modality-wise laterality errors.
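mPower's internals are proprietary, but the general rule-based technique can be illustrated: compare the sidedness stated in the exam description against the sidedness mentioned in the findings, and flag mismatches. A hedged sketch, not the vendor's implementation:

```python
# Hedged illustration of a rule-based laterality-mismatch check, in the spirit of
# the NLP QA tool described above; mPower's actual implementation is not public.
import re

SIDE = re.compile(r"\b(left|right)\b", re.IGNORECASE)

def sides(text: str) -> set[str]:
    return {m.lower() for m in SIDE.findall(text)}

def laterality_mismatch(exam_description: str, findings: str) -> bool:
    """Flag a report whose findings mention a side absent from the exam description."""
    exam = sides(exam_description)
    return bool(exam) and bool(sides(findings) - exam)

# A right-knee exam whose findings describe the left knee is flagged.
print(laterality_mismatch("MRI RIGHT KNEE WITHOUT CONTRAST",
                          "Tear of the left medial meniscus."))  # True
```

Note that such a naive check also flags legitimate contralateral comparisons ("...compared with the left"), which is consistent with the high NLP false-positive rate reported in the Results below.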
Results
Among the 898 NLP-flagged laterality errors, 64% (574 of 898) were NLP errors (false-positives) and 36% (324 of 898) were reporting errors (true-positives). The ATARI text query feature correctly identified the absence of a laterality mismatch (NLP false-positives) with 97.4% accuracy (115 of 118 reports; 95% confidence interval [CI] = 96.5%-98.3%). Combined vision and text query resulted in 98.3% accuracy (116 of 118 reports or images; 95% CI = 97.6%-99.0%), and vision query alone had 98.3% accuracy (116 of 118 images; 95% CI = 97.6%-99.0%).
Conclusion
The generative AI-empowered ATARI prototype outperformed the assessed NLP tool in distinguishing true from false laterality errors in radiology reports while also enabling image-based laterality determination. Residual errors by the ATARI text query on complex radiology reports underscore the need for further improvement in the technology.
{"title":"Assessing Laterality Errors in Radiology: Comparing Generative Artificial Intelligence and Natural Language Processing","authors":"","doi":"10.1016/j.jacr.2024.06.014","DOIUrl":"10.1016/j.jacr.2024.06.014","url":null,"abstract":"<div><h3>Purpose</h3><div>We compared the performance of generative artificial intelligence (AI) (Augmented Transformer Assisted Radiology Intelligence [ATARI, Microsoft Nuance, Microsoft Corporation, Redmond, Washington]) and natural language processing (NLP) tools for identifying laterality errors in radiology reports and images.</div></div><div><h3>Methods</h3><div>We used an NLP-based (mPower, Microsoft Nuance) tool to identify radiology reports flagged for laterality errors in its Quality Assurance Dashboard. The NLP model detects and highlights laterality mismatches in radiology reports. From an initial pool of 1,124 radiology reports flagged by the NLP for laterality errors, we selected and evaluated 898 reports that encompassed radiography, CT, MRI, and ultrasound modalities to ensure comprehensive coverage. A radiologist reviewed each radiology report to assess if the flagged laterality errors were present (reporting error—true-positive) or absent (NLP error—false-positive). Next, we applied ATARI to 237 radiology reports and images with consecutive NLP true-positive (118 reports) and false-positive (119 reports) laterality errors. We estimated accuracy of NLP and generative AI tools to identify overall and modality-wise laterality errors.</div></div><div><h3>Results</h3><div>Among the 898 NLP-flagged laterality errors, 64% (574 of 898) had NLP errors and 36% (324 of 898) were reporting errors. The text query ATARI feature correctly identified the absence of laterality mismatch (NLP false-positives) with a 97.4% accuracy (115 of 118 reports; 95% confidence interval [CI] = 96.5%-98.3%). Combined vision and text query resulted in 98.3% accuracy (116 of 118 reports or images; 95% CI = 97.6%-99.0%), and query alone had a 98.3% accuracy (116 of 118 images; 95% CI = 97.6%-99.0%).</div></div><div><h3>Conclusion</h3><div>The generative AI-empowered ATARI prototype outperformed the assessed NLP tool for determining true and false laterality errors in radiology reports while enabling an image-based laterality determination. Underlying errors in ATARI text query in complex radiology reports emphasize the need for further improvement in the technology.</div></div>","PeriodicalId":49044,"journal":{"name":"Journal of the American College of Radiology","volume":null,"pages":null},"PeriodicalIF":4.0,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141499751","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}