Sarah J Lewis, Jayden B Wells, Warren M Reed, Claudia Mello-Thoms, Peter A O'Reilly, Marion Dimigen
Journal of Medical Imaging, vol. 11, no. 4, p. 045504. Published 2024-07-01 (Epub 2024-08-28). DOI: 10.1117/1.JMI.11.4.045504. PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11349612/pdf/
Use of reporting templates for chest radiographs in a coronavirus disease 2019 context: measuring concordance of radiologists with three international templates.
Purpose: Reporting templates for chest radiographs (CXRs) of patients presenting with, or being clinically managed for, severe acute respiratory syndrome coronavirus 2 [coronavirus disease 2019 (COVID-19)] have attracted advocacy from international radiology societies. We aim to explore the effectiveness and usability of three international templates through the concordance of, and between, radiologists reporting on the presence and severity of COVID-19 on CXRs.
Approach: Seventy CXRs were obtained from a referral hospital: 50 from patients with COVID-19 (30 rated as "classic" COVID-19 appearance and 20 as "indeterminate"), plus 10 "normal" and 10 "alternative pathology" CXRs. The recruited radiologists were assigned to three test sets containing the same CXRs but with the templates presented in different orders. Each radiologist read their test set three times, classifying each CXR using the Royal Australian and New Zealand College of Radiologists (RANZCR), British Society of Thoracic Imaging (BSTI), and Dutch Modified COVID-19 Reporting and Data System (mCO-RADS) templates. Inter-reader and intra-reader variability were measured using Fleiss' kappa coefficient.
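For readers unfamiliar with the agreement statistic used above, the following is a minimal sketch of how Fleiss' kappa is computed from a table of category counts. The data here are illustrative only (they are not the study's ratings), and the function name is an assumption, not part of the paper.

```python
# Sketch: Fleiss' kappa for multiple readers classifying the same images.
# Illustrative data only -- not the study's actual ratings.

def fleiss_kappa(ratings):
    """ratings: one row per image; each row counts how many readers
    assigned the image to each template category."""
    N = len(ratings)        # number of images
    n = sum(ratings[0])     # readers per image (assumed constant)
    k = len(ratings[0])     # number of categories

    # mean per-image agreement P_bar
    P_bar = sum(
        (sum(c * c for c in row) - n) / (n * (n - 1)) for row in ratings
    ) / N
    # expected chance agreement from marginal category proportions
    p_j = [sum(row[j] for row in ratings) / (N * n) for j in range(k)]
    P_e = sum(p * p for p in p_j)
    return (P_bar - P_e) / (1 - P_e)

# Illustrative: 4 CXRs, 12 readers, 3 categories
# (e.g. "classic" / "indeterminate" / "normal")
example = [
    [10, 2, 0],
    [8, 3, 1],
    [0, 1, 11],
    [6, 5, 1],
]
kappa = fleiss_kappa(example)  # ~0.35, "fair" agreement
```

A kappa of 0 indicates chance-level agreement and 1 perfect agreement; values around 0.41 to 0.60, such as the BSTI template's 0.46, are conventionally labelled "moderate."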
Results: Twelve Australian radiologists participated. The BSTI template had the highest inter-reader agreement (0.46; "moderate" agreement), followed by RANZCR (0.45) and mCO-RADS (0.32). Concordance was driven by strong agreement on the "normal" and "alternative" classifications and was lowest for "indeterminate." General consistency was observed across classifications and templates, with intra-reader agreement ranging from "good" to "very good" for COVID-19 CXRs (0.61), "normal" CXRs (0.76), and "alternative" CXRs (0.68).
Conclusions: Reporting templates may be useful in reducing variation among radiology reports, with intra-reader consistency showing particular promise. Feasibility and implementation require a broader approach that includes referring and treating doctors, plus the development of training packages for radiologists specific to the template being used.
Journal introduction:
JMI covers fundamental and translational research, as well as applications, focused on medical imaging, which continues to yield physical and biomedical advancements in the early detection, diagnostics, and therapy of disease, as well as in the understanding of normal anatomy and physiology. The scope of JMI includes: imaging physics; tomographic reconstruction algorithms (such as those in CT and MRI); image processing and deep learning; computer-aided diagnosis and quantitative image analysis; visualization and modeling; picture archiving and communications systems (PACS); image perception and observer performance; technology assessment; ultrasonic imaging; image-guided procedures; digital pathology; and biomedical applications of biomedical imaging. JMI allows for the peer-reviewed communication and archiving of scientific developments, translational and clinical applications, reviews, and recommendations for the field.