{"title":"神经内科客观结构化临床检查在不同专业间的相互可靠性。","authors":"Laura Mechtouff, Baptiste Balanca, Julien Jung, Julie Bourgeois-Vionnet, Chloé Dumot, Déborah Guery, Thiébaud Picart, Lionel Bapteste, Geneviève Demarquay, Alexandre Bani-Sadr, Lucie Rascle, Yves Berthezène, Timothée Jacquesson, Camille Amaz, Juliette Macabrey, Inès Ramos, Marie Viprey, Gilles Rode, Marion Cortet","doi":"10.1080/0142159X.2023.2244146","DOIUrl":null,"url":null,"abstract":"<p><strong>Purpose: </strong>To assess interrater reliability and examiners' characteristics, especially specialty, associated with scoring of neurology objective structured clinical examination (OSCE).</p><p><strong>Material and methods: </strong>During a neurology mock OSCE, five randomly chosen students volunteers were filmed while performing 1 of the 5 stations. Video recordings were scored by physicians from the Lyon and Clermont-Ferrand university teaching hospitals to assess students performance using both a checklist scoring and a global rating scale. Interrater reliability between examiners were assessed using intraclass coefficient correlation. Multivariable linear regression models including video recording as random effect dependent variable were performed to detect factors associated with scoring.</p><p><strong>Results: </strong>Thirty examiners including 15 (50%) neurologists participated. The intraclass correlation coefficient of checklist scores and global ratings between examiners were 0.71 (CI95% [0.45-0.95]) and 0.54 (CI95% [0.28-0.91]), respectively. In multivariable analyses, no factor was associated with checklist scores, while male gender of examiner was associated with lower global rating (<i>β</i> coefficient = -0.37; CI 95% [-0.62-0.11]).</p><p><strong>Conclusions: </strong>Our study demonstrated through a video-based scoring method that agreement among examiners was good using checklist scoring while moderate using global rating scale in neurology OSCE. Examiner's specialty did not affect scoring whereas gender was associated with global rating scale.</p>","PeriodicalId":18643,"journal":{"name":"Medical Teacher","volume":" ","pages":"239-244"},"PeriodicalIF":3.3000,"publicationDate":"2024-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Interrater reliability in neurology objective structured clinical examination across specialties.\",\"authors\":\"Laura Mechtouff, Baptiste Balanca, Julien Jung, Julie Bourgeois-Vionnet, Chloé Dumot, Déborah Guery, Thiébaud Picart, Lionel Bapteste, Geneviève Demarquay, Alexandre Bani-Sadr, Lucie Rascle, Yves Berthezène, Timothée Jacquesson, Camille Amaz, Juliette Macabrey, Inès Ramos, Marie Viprey, Gilles Rode, Marion Cortet\",\"doi\":\"10.1080/0142159X.2023.2244146\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Purpose: </strong>To assess interrater reliability and examiners' characteristics, especially specialty, associated with scoring of neurology objective structured clinical examination (OSCE).</p><p><strong>Material and methods: </strong>During a neurology mock OSCE, five randomly chosen students volunteers were filmed while performing 1 of the 5 stations. Video recordings were scored by physicians from the Lyon and Clermont-Ferrand university teaching hospitals to assess students performance using both a checklist scoring and a global rating scale. Interrater reliability between examiners were assessed using intraclass coefficient correlation. 
Multivariable linear regression models including video recording as random effect dependent variable were performed to detect factors associated with scoring.</p><p><strong>Results: </strong>Thirty examiners including 15 (50%) neurologists participated. The intraclass correlation coefficient of checklist scores and global ratings between examiners were 0.71 (CI95% [0.45-0.95]) and 0.54 (CI95% [0.28-0.91]), respectively. In multivariable analyses, no factor was associated with checklist scores, while male gender of examiner was associated with lower global rating (<i>β</i> coefficient = -0.37; CI 95% [-0.62-0.11]).</p><p><strong>Conclusions: </strong>Our study demonstrated through a video-based scoring method that agreement among examiners was good using checklist scoring while moderate using global rating scale in neurology OSCE. Examiner's specialty did not affect scoring whereas gender was associated with global rating scale.</p>\",\"PeriodicalId\":18643,\"journal\":{\"name\":\"Medical Teacher\",\"volume\":\" \",\"pages\":\"239-244\"},\"PeriodicalIF\":3.3000,\"publicationDate\":\"2024-02-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Medical Teacher\",\"FirstCategoryId\":\"95\",\"ListUrlMain\":\"https://doi.org/10.1080/0142159X.2023.2244146\",\"RegionNum\":2,\"RegionCategory\":\"教育学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2023/8/22 0:00:00\",\"PubModel\":\"Epub\",\"JCR\":\"Q1\",\"JCRName\":\"EDUCATION, SCIENTIFIC DISCIPLINES\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Medical Teacher","FirstCategoryId":"95","ListUrlMain":"https://doi.org/10.1080/0142159X.2023.2244146","RegionNum":2,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2023/8/22 0:00:00","PubModel":"Epub","JCR":"Q1","JCRName":"EDUCATION, SCIENTIFIC DISCIPLINES","Score":null,"Total":0}
Interrater reliability in neurology objective structured clinical examination across specialties.
Purpose: To assess interrater reliability and to identify examiner characteristics, especially specialty, associated with scoring of a neurology objective structured clinical examination (OSCE).
Material and methods: During a neurology mock OSCE, five randomly chosen student volunteers were filmed while each performed one of the five stations. The video recordings were scored by physicians from the Lyon and Clermont-Ferrand university teaching hospitals, who assessed the students' performance using both a checklist score and a global rating scale. Interrater reliability between examiners was assessed using the intraclass correlation coefficient. Multivariable linear regression models, with the video recording included as a random effect, were fitted to identify factors associated with scoring.
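To make the statistical approach concrete, the sketch below shows in Python how an intraclass correlation coefficient across examiners and a mixed-effects linear regression with the video recording as a random effect can be computed. It is an illustration only: the column names, libraries (pandas, pingouin, statsmodels), and toy data are assumptions made so the snippet runs end to end, not the authors' actual code or data.

```python
# Minimal, illustrative sketch (not the authors' analysis code).
# Column names ('video', 'examiner', 'specialty', 'gender',
# 'checklist_score', 'global_rating') and the simulated scores are assumptions.
import numpy as np
import pandas as pd
import pingouin as pg
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Long-format ratings: every examiner scores every video recording.
examiners = pd.DataFrame({
    "examiner": ["A", "B", "C", "D"],
    "specialty": ["neurology", "neurology", "other", "other"],
    "gender": ["M", "F", "M", "F"],
})
videos = pd.DataFrame({"video": [1, 2, 3, 4, 5]})
df = examiners.merge(videos, how="cross")

# Simulated scores: a per-video "true" level plus examiner-level noise.
checklist_base = {1: 14, 2: 10, 3: 18, 4: 12, 5: 16}
rating_base = {1: 4, 2: 3, 3: 5, 4: 3, 5: 4}
df["checklist_score"] = df["video"].map(checklist_base) + rng.integers(-1, 2, len(df))
df["global_rating"] = df["video"].map(rating_base) + rng.integers(-1, 2, len(df))

# Interrater reliability: intraclass correlation coefficients across examiners.
icc = pg.intraclass_corr(data=df, targets="video", raters="examiner",
                         ratings="checklist_score")
print(icc[["Type", "ICC", "CI95%"]])

# Mixed-effects linear regression: examiner characteristics as fixed effects,
# video recording as the random-effect grouping factor.
model = smf.mixedlm("global_rating ~ specialty + gender", data=df,
                    groups=df["video"])
print(model.fit().summary())
```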
Results: Thirty examiners, including 15 (50%) neurologists, participated. The intraclass correlation coefficients between examiners were 0.71 (95% CI [0.45, 0.95]) for checklist scores and 0.54 (95% CI [0.28, 0.91]) for global ratings. In multivariable analyses, no factor was associated with checklist scores, whereas male examiner gender was associated with lower global ratings (β coefficient = -0.37; 95% CI [-0.62, -0.11]).
Conclusions: Using a video-based scoring method, our study showed that agreement among examiners in a neurology OSCE was good with checklist scoring and moderate with the global rating scale. Examiners' specialty did not affect scoring, whereas gender was associated with the global rating.
Journal introduction:
Medical Teacher provides accounts of new teaching methods, guidance on structuring courses and assessing achievement, and serves as a forum for communication between medical teachers and those involved in general education. In particular, the journal recognizes the problems teachers have in keeping up to date with developments in educational methods that lead to more effective teaching and learning at a time when the content of the curriculum, from medical procedures to policy changes in health care provision, is also changing. The journal features reports of innovation and research in medical education, case studies, survey articles, practical guidelines, reviews of current literature and book reviews. All articles are peer reviewed.