An Assessment of Artificial Intelligence Software Measurement of Physician Performance: Not Quite Ready for Prime Time

G. Sc
{"title":"An Assessment of Artificial Intelligence Software Measurement of Physician Performance: Not Quite Ready for Prime Time","authors":"G. Sc","doi":"10.26420/austinjsurg.2018.1145","DOIUrl":null,"url":null,"abstract":"With an increase in hospital-based employment of plastic surgery graduates, understanding how hospitals evaluate physician performance is essential, as data could have important career implications. At our hospital (Loma Linda University Medical Center), the software program The CRIMSON Initiative is used to provide data related to physician performance. Upon review of this data, our Plastic Surgery department had a 30 days readmission rate that was higher than the hospital average. We were interested to see how accurate this number actually was, and so we closely examined the 30 days readmission rates for our department for an entire year. Using the CRIMSON Initiative software, two separate major searches were used (by “Attending Physician” and by “Performing Physician”). Searching either way revealed a 30 days readmission rate that was higher for our department than the hospital average. However, after manually sorting through each patient chart, there were cases that were not readmissions due to the patient’s plastic surgery. Once these cases were excluded from the original calculations, it brought our department’s readmission rate down to slightly lower than the hospital average for each search. This discrepancy highlights the margin of error of such automated physician performance programs, and brings to light pitfalls that physicians should be aware of concerning similar programs at their own institutions. 
As plastic surgery residency graduates are increasingly employed by hospitals, it would behoove them to be cognizant of this issue, and to be empowered to question the data being used to assess their performance.","PeriodicalId":91056,"journal":{"name":"Austin journal of surgery","volume":"322 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2018-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Austin journal of surgery","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.26420/austinjsurg.2018.1145","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

With the increase in hospital-based employment of plastic surgery graduates, understanding how hospitals evaluate physician performance is essential, as these data can have important career implications. At our hospital (Loma Linda University Medical Center), the software program The CRIMSON Initiative is used to provide data related to physician performance. Upon review of these data, our Plastic Surgery department had a 30-day readmission rate higher than the hospital average. To determine how accurate this figure actually was, we closely examined our department's 30-day readmission rates for an entire year. Using the CRIMSON Initiative software, we ran two separate major searches (by "Attending Physician" and by "Performing Physician"). Both searches showed a 30-day readmission rate for our department that was higher than the hospital average. However, after manually reviewing each patient chart, we found cases that were not readmissions related to the patient's plastic surgery. Once these cases were excluded from the original calculations, our department's readmission rate fell slightly below the hospital average for each search. This discrepancy highlights the margin of error of such automated physician performance programs and brings to light pitfalls that physicians should be aware of concerning similar programs at their own institutions. As plastic surgery residency graduates are increasingly employed by hospitals, it would behoove them to be cognizant of this issue and empowered to question the data used to assess their performance.