Daniel A Driscoll, Robert G Ricotti, Michael-Alexander Malahias, Allina A Nocon, Troy D Bornes, T David Tarity, Kathleen Tam, Ajay Premkumar, Wali U Pirzada, Friedrich Boettner, Peter K Sculco
DOI: 10.1007/s00402-024-05524-x
Journal Article, published 2024-09-01 (Epub 2024-09-23)
Reliability and validity of the Paprosky classification for acetabular bone loss based on level of orthopedic training.
Background: Reliability and validity of the Paprosky classification for acetabular bone loss have been debated. Additionally, the relationship between surgeon training level and Paprosky classification accuracy/treatment selection is poorly defined. This study aimed to: (1) evaluate the validity of preoperative Paprosky classification/treatment selection compared to intraoperative classification/treatment selection and (2) evaluate the relationship between training level and intra-rater and inter-rater reliability of preoperative classification and treatment choice.
Methods: Seventy-four patients with intraoperative Paprosky types [I (N = 24), II (N = 27), III (N = 23)] were selected. Six raters (residents (N = 2), fellows (N = 2), attendings (N = 2)) independently provided Paprosky classification and treatment selection using preoperative radiographs. Raters reviewed images twice, 14 days apart. Cohen's kappa was calculated for (1) inter-rater agreement of Paprosky classification/treatment by training level, (2) intra-rater reliability, (3) agreement between preoperative and intraoperative classification, and (4) agreement between preoperative treatment selection and the actual treatment performed.
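The agreement statistic used throughout the Methods is unweighted Cohen's kappa, which compares observed rater agreement against the agreement expected by chance from each rater's marginal category frequencies. A minimal sketch of that computation is below; the example grade lists are invented for illustration and are not the study's data.

```python
# Illustrative Cohen's kappa for two raters' Paprosky grades.
# The grade lists are made up for demonstration; they are not study data.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Unweighted Cohen's kappa for two equal-length lists of ratings."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of cases where the two raters match.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of marginal frequencies, summed over categories.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2
    return (p_o - p_e) / (1 - p_e)

grades_a = ["I", "II", "II", "III", "I", "II", "III", "I"]
grades_b = ["I", "II", "III", "III", "II", "II", "III", "I"]
print(round(cohens_kappa(grades_a, grades_b), 2))  # → 0.63
```

By the conventional rough interpretation of kappa, values near 0.2-0.4 indicate fair agreement, 0.4-0.6 moderate, and 0.6-0.8 good, which is how the K ranges in the Results are characterized.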
Results: Inter-rater agreement between raters of the same training level was moderate for classification (K range = 0.42-0.50) and mostly poor for treatment selection (K range = 0.02-0.44). Intra-rater agreement ranged from fair to good (K range = 0.40-0.73). Agreement between preoperative and intraoperative classifications was fair (K range = 0.25-0.36). Agreement between preoperative treatment selections and actual treatments was fair (K range = 0.21-0.39).
Conclusion: Inter-rater reliability of the Paprosky classification was poor to moderate at all training levels. Preoperative Paprosky classification showed fair agreement with intraoperative Paprosky grading. Treatment selections based on preoperative radiographs had fair agreement with the actual treatments performed. Further research should investigate the role of advanced imaging and alternative classifications in the evaluation of acetabular bone loss.