Reliability and validity of the Paprosky classification for acetabular bone loss based on level of orthopedic training

Daniel A Driscoll, Robert G Ricotti, Michael-Alexander Malahias, Allina A Nocon, Troy D Bornes, T David Tarity, Kathleen Tam, Ajay Premkumar, Wali U Pirzada, Friedrich Boettner, Peter K Sculco

Archives of Orthopaedic and Trauma Surgery, pages 4267-4273. Epub 2024-09-23. DOI: 10.1007/s00402-024-05524-x
Citations: 0
Abstract
Background: Reliability and validity of the Paprosky classification for acetabular bone loss have been debated. Additionally, the relationship between surgeon training level and Paprosky classification accuracy/treatment selection is poorly defined. This study aimed to: (1) evaluate the validity of preoperative Paprosky classification/treatment selection compared to intraoperative classification/treatment selection and (2) evaluate the relationship between training level and intra-rater and inter-rater reliability of preoperative classification and treatment choice.
Methods: Seventy-four patients with intraoperative Paprosky types [I (N = 24), II (N = 27), III (N = 23)] were selected. Six raters (Residents (N = 2), Fellows (N = 2), Attendings (N = 2)) independently provided Paprosky classification and treatment using preoperative radiographs. Raters reviewed images twice, 14 days apart. Cohen's Kappa was calculated for (1) inter-rater agreement of Paprosky classification/treatment by training level, (2) intra-rater reliability, (3) preoperative and intraoperative classification agreement, and (4) preoperative treatment selection and actual treatment performed.
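For readers unfamiliar with the statistic, Cohen's Kappa compares observed agreement between two raters against the agreement expected by chance from each rater's label frequencies. A minimal sketch of the computation (the rater labels below are hypothetical, not data from the study):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters grading the same items.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed
    proportion of agreement and p_e is the agreement expected by
    chance from each rater's marginal label frequencies.
    """
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed proportion of agreement.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from marginal frequencies of each label.
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    p_e = sum(freq_a[label] * freq_b[label] for label in freq_a) / (n * n)
    if p_e == 1.0:
        return 1.0  # both raters used a single identical label
    return (p_o - p_e) / (1 - p_e)

# Hypothetical Paprosky gradings (types I/II/III) from two raters.
a = ["I", "II", "II", "III", "I", "II"]
b = ["I", "II", "III", "III", "I", "I"]
print(round(cohens_kappa(a, b), 3))  # prints 0.52
```

Note that kappa is lower than the raw agreement rate (4/6 here) because it discounts agreement attributable to chance.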
Results: Inter-rater agreement between raters of the same training level was moderate (K range = 0.42-0.50), and mostly poor for treatment selection (K range = 0.02-0.44). Intra-rater agreement ranged from fair to good (K range = 0.40-0.73). Agreement between preoperative and intraoperative classifications was fair (K range = 0.25-0.36). Agreement between preoperative treatment selections and actual treatments was fair (K range = 0.21-0.39).
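The qualitative labels in the results (fair, moderate, good) are consistent with a commonly used interpretation scale for kappa, though the abstract does not name the scale it applied. A hedged sketch of that mapping, assuming bands of 0.21-0.40 fair, 0.41-0.60 moderate, and 0.61-0.80 good:

```python
def interpret_kappa(k):
    """Map a kappa value to a qualitative agreement band.

    Band boundaries are an assumption inferred from the abstract's
    wording; the study's actual interpretation scale is not stated.
    """
    if k < 0.21:
        return "poor"
    elif k <= 0.40:
        return "fair"
    elif k <= 0.60:
        return "moderate"
    elif k <= 0.80:
        return "good"
    else:
        return "very good"

# Values from the reported ranges fall in the bands the abstract names.
print(interpret_kappa(0.50))  # prints moderate
print(interpret_kappa(0.36))  # prints fair
print(interpret_kappa(0.73))  # prints good
```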
Conclusion: Inter-rater reliability of Paprosky classification was poor to moderate for all training levels. Preoperative Paprosky classification showed fair agreement with intraoperative Paprosky grading. Treatment selections based on preoperative radiographs had fair agreement with actual treatments. Further research should investigate the role of advanced imaging and alternative classifications in evaluation of acetabular bone loss.
Journal Description
"Archives of Orthopaedic and Trauma Surgery" is a rich source of instruction and information for physicians in clinical practice and research in the extensive field of orthopaedics and traumatology. The journal publishes papers that deal with diseases and injuries of the musculoskeletal system from all fields and aspects of medicine. The journal is particularly interested in papers that satisfy the information needs of orthopaedic clinicians and practitioners. The journal places special emphasis on clinical relevance.
"Archives of Orthopaedic and Trauma Surgery" is the official journal of the German Speaking Arthroscopy Association (AGA).