Gathering Validity Evidence for a Simulation-Based Test of Otoscopy Skills.

Annals of Otology Rhinology and Laryngology · IF 1.3 · JCR Q3 (Otorhinolaryngology) · CAS Medicine Region 4 · Pub Date: 2024-10-17 · DOI: 10.1177/00034894241288434
Josefine Hastrup von Buchwald, Martin Frendø, Andreas Frithioff, Anders Britze, Thomas Winther Frederiksen, Jacob Melchiors, Steven Arild Wuyts Andersen
Citations: 0

Abstract


Objective: Otoscopy is a key clinical examination used by multiple healthcare providers but training and testing of otoscopy skills remain largely uninvestigated. Simulator-based assessment of otoscopy skills exists, but evidence on its validity is scarce. In this study, we explored automated assessment and performance metrics of an otoscopy simulator through collection of validity evidence according to Messick's framework.

Methods: Novices and experienced otoscopists completed a test program on the Earsi otoscopy simulator. Automated assessments of diagnostic ability and performance were compared with manual ratings of technical skills. Reliability of assessment was evaluated using Generalizability theory. Linear mixed models and correlation analysis were used to compare automated and manual assessments. Finally, we used the contrasting groups method to define a pass/fail level for the automated score.
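The contrasting groups method mentioned above sets the cutoff at the score where a result is equally likely to come from the non-competent or the competent group. A minimal sketch of the normal-intersection variant of that method, using hypothetical scores (not the study's data):

```python
from statistics import NormalDist, mean, stdev

def contrasting_groups_cutoff(fail_scores, pass_scores, step=0.01):
    """Fit a normal distribution to each group's scores and scan
    between the two group means for the point where the densities
    cross, i.e. where a score is equally likely under either group."""
    d_fail = NormalDist(mean(fail_scores), stdev(fail_scores))
    d_pass = NormalDist(mean(pass_scores), stdev(pass_scores))
    lo, hi = sorted((d_fail.mean, d_pass.mean))
    best_x, best_gap = lo, float("inf")
    x = lo
    while x <= hi:
        gap = abs(d_fail.pdf(x) - d_pass.pdf(x))
        if gap < best_gap:
            best_x, best_gap = x, gap
        x += step
    return best_x

# Hypothetical automated scores (percent), for illustration only
novices = [40, 42, 44, 46, 48]
experienced = [55, 57, 59, 61, 63]
cutoff = contrasting_groups_cutoff(novices, experienced)
```

With equal group standard deviations, as in these toy lists, the densities cross at the midpoint between the means; with unequal spreads the cutoff shifts toward the tighter distribution.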

Results: A total of 12 novices and 12 experienced otoscopists completed the study. We found an overall G-coefficient of .69 for automated assessment. The experienced otoscopists achieved a significantly higher mean automated score than the novices (59.9% (95% CI [57.3%-62.6%]) vs. 44.6% (95% CI [41.9%-47.2%]), P < .001). For the manual assessment of technical skills, there was no significant difference, nor did the automated score correlate with the manually rated score (Pearson's r = .20, P = .601). We established a pass/fail standard of 49.3% for the simulator's automated score.

Conclusion: We explored validity evidence supporting an otoscopy simulator's automated score, demonstrating that this score mainly reflects cognitive skills. Manual assessment therefore still seems necessary at this point and external video-recording is necessary for valid assessment. To improve the reliability, the test course should include more cases to achieve a higher G-coefficient and a higher pass/fail standard should be used.
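The relation the conclusion relies on, that adding test cases raises the G-coefficient, follows from Generalizability theory for a crossed persons × cases design: relative error variance shrinks as 1/(number of cases). A minimal sketch with toy scores (not the study's data), assuming one score per person-case cell:

```python
def g_coefficient(scores, n_cases_projected=None):
    """Relative G-coefficient for a fully crossed persons x cases
    design, via ANOVA variance components. scores[p][c] is person
    p's score on case c; n_cases_projected runs a D-study for a
    longer test course."""
    n_p, n_c = len(scores), len(scores[0])
    grand = sum(sum(row) for row in scores) / (n_p * n_c)
    mean_p = [sum(row) / n_c for row in scores]
    mean_c = [sum(scores[p][c] for p in range(n_p)) / n_p
              for c in range(n_c)]
    ms_p = n_c * sum((m - grand) ** 2 for m in mean_p) / (n_p - 1)
    ms_res = sum(
        (scores[p][c] - mean_p[p] - mean_c[c] + grand) ** 2
        for p in range(n_p) for c in range(n_c)
    ) / ((n_p - 1) * (n_c - 1))
    var_person = max((ms_p - ms_res) / n_c, 0.0)
    n_prime = n_cases_projected or n_c
    # Relative error variance shrinks as 1/n_cases, so more cases raise G
    return var_person / (var_person + ms_res / n_prime)

# Toy data: 3 persons x 2 cases; projecting to 12 cases raises G
toy = [[1, 2], [5, 8], [10, 11]]
g_now = g_coefficient(toy)
g_more = g_coefficient(toy, n_cases_projected=12)
```

This is the standard p×c random-effects decomposition; a real D-study would also report the absolute G-coefficient, which additionally counts case-difficulty variance as error.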

Source journal metrics:
CiteScore: 3.10
Self-citation rate: 7.10%
Annual publications: 171
Review time: 4-8 weeks
Journal description: The Annals of Otology, Rhinology & Laryngology publishes original manuscripts of clinical and research importance in otolaryngology–head and neck medicine and surgery, otology, neurotology, bronchoesophagology, laryngology, rhinology, head and neck oncology and surgery, plastic and reconstructive surgery, pediatric otolaryngology, audiology, and speech pathology. In-depth studies (supplements), papers of historical interest, and reviews of computer software and applications in otolaryngology are also published, as well as imaging, pathology, and clinicopathology studies, book reviews, and letters to the editor. AOR is the official journal of the American Broncho-Esophagological Association.