An Assessment of Artificial Intelligence Software Measurement of Physician Performance: Not Quite Ready for Prime Time
G. Sc
Austin Journal of Surgery, 322(1), published 2018-07-30
DOI: 10.26420/austinjsurg.2018.1145
Citations: 0
Abstract
With the increase in hospital-based employment of plastic surgery graduates, understanding how hospitals evaluate physician performance is essential, as these data can have important career implications. At our hospital (Loma Linda University Medical Center), the software program The CRIMSON Initiative provides data on physician performance. On review of these data, our Plastic Surgery department had a 30-day readmission rate higher than the hospital average. To gauge how accurate this figure actually was, we closely examined our department's 30-day readmission rates for an entire year. Using the CRIMSON Initiative software, we ran two separate major searches (by "Attending Physician" and by "Performing Physician"). Either search returned a 30-day readmission rate for our department that exceeded the hospital average. However, manual review of each patient chart revealed cases that were not readmissions attributable to the patient's plastic surgery. Excluding these cases from the original calculations brought our department's readmission rate slightly below the hospital average for each search. This discrepancy highlights the margin of error of such automated physician-performance programs and brings to light pitfalls that physicians should be aware of with similar programs at their own institutions. As plastic surgery residency graduates are increasingly employed by hospitals, it would behoove them to be cognizant of this issue and to be empowered to question the data used to assess their performance.
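The recalculation described above can be sketched in a few lines. This is an illustrative example only, not code or data from the study: the counts below are hypothetical, and the function name `readmission_rate` is our own, chosen to show how excluding readmissions unrelated to the index plastic-surgery procedure lowers the computed 30-day rate.

```python
def readmission_rate(readmissions: int, discharges: int) -> float:
    """30-day readmission rate as a percentage of discharges."""
    return 100.0 * readmissions / discharges

# Hypothetical numbers for illustration (not from the paper):
discharges = 200   # departmental discharges in the review period
flagged = 24       # readmissions attributed to the department by the software
unrelated = 10     # of those, found on manual chart review to be unrelated

raw_rate = readmission_rate(flagged, discharges)
adjusted_rate = readmission_rate(flagged - unrelated, discharges)
print(f"software-reported: {raw_rate:.1f}%  after chart review: {adjusted_rate:.1f}%")
```

With these assumed counts, the software-reported rate of 12.0% falls to 7.0% after the unrelated cases are excluded, mirroring the direction of the correction the study found.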