Electronic acquisition of OSCE performance using tablets.
Pub Date: 2015-10-15. eCollection Date: 2015-01-01. DOI: 10.3205/zma000983
Achim Hochlehnert, Jobst-Hendrik Schultz, Andreas Möltner, Sevgi Tımbıl, Konstantin Brass, Jana Jünger
Background: Objective Structured Clinical Examinations (OSCEs) often require considerable material and organizational resources, since the scores are often recorded on paper. Computer-assisted administration is an alternative that can reduce the need for material resources. The use of tablets in particular seems sensible because they are easy to transport and flexible to use.
Aim: User acceptance concerning the use of tablets during OSCEs has not yet been extensively investigated. The aim of this study was to evaluate tablet-based OSCEs from the perspective of the user (examiner) and the student examinee.
Method: For two OSCEs in Internal Medicine at the University of Heidelberg, user acceptance was analyzed regarding tablet-based administration (satisfaction with functionality) and the subjective amount of effort as perceived by the examiners. Standardized questionnaires and semi-standardized interviews were conducted (complete survey of all participating examiners). In addition, for one OSCE, the subjective evaluation of this mode of assessment was gathered from a random sample of participating students in semi-standardized interviews.
Results: Overall, the examiners were very satisfied with using tablets during the assessment. The subjective effort required to use the tablet was rated on average as "hardly difficult". As particular advantages of this mode of administration, the examiners identified the ease of use and the low rate of error. The interviews with the examinees likewise indicated acceptance of tablet use during the assessment.
Discussion: Overall, it was found that the use of tablets during OSCEs was well accepted by both examiners and examinees. We expect that this mode of assessment also offers advantages regarding assessment documentation, use of resources, and rate of error in comparison with paper-based assessments; all of these aspects should be followed up on in further studies.
{"title":"Electronic acquisition of OSCE performance using tablets.","authors":"Achim Hochlehnert, Jobst-Hendrik Schultz, Andreas Möltner, Sevgi Tımbıl, Konstantin Brass, Jana Jünger","doi":"10.3205/zma000983","DOIUrl":"https://doi.org/10.3205/zma000983","url":null,"abstract":"<p><strong>Background: </strong>Objective Structured Clinical Examinations (OSCEs) often involve a considerable amount of resources in terms of materials and organization since the scores are often recorded on paper. Computer-assisted administration is an alternative with which the need for material resources can be reduced. In particular, the use of tablets seems sensible because these are easy to transport and flexible to use.</p><p><strong>Aim: </strong>User acceptance concerning the use of tablets during OSCEs has not yet been extensively investigated. The aim of this study was to evaluate tablet-based OSCEs from the perspective of the user (examiner) and the student examinee.</p><p><strong>Method: </strong>For two OSCEs in Internal Medicine at the University of Heidelberg, user acceptance was analyzed regarding tablet-based administration (satisfaction with functionality) and the subjective amount of effort as perceived by the examiners. Standardized questionnaires and semi-standardized interviews were conducted (complete survey of all participating examiners). In addition, for one OSCE, the subjective evaluation of this mode of assessment was gathered from a random sample of participating students in semi-standardized interviews.</p><p><strong>Results: </strong>Overall, the examiners were very satisfied with using tablets during the assessment. The subjective amount of effort to use the tablet was found on average to be \"hardly difficult\". The examiners identified the advantages of this mode of administration as being in particular the ease of use and low rate of error. During the interviews of the examinees, acceptance for the use of tablets during the assessment was also detected.</p><p><strong>Discussion: </strong>Overall, it was found that the use of tablets during OSCEs was well accepted by both examiners and examinees. We expect that this mode of assessment also offers advantages regarding assessment documentation, use of resources, and rate of error in comparison with paper-based assessments; all of these aspects should be followed up on in further studies.</p>","PeriodicalId":30054,"journal":{"name":"GMS Zeitschrift fur Medizinische Ausbildung","volume":"32 4","pages":"Doc41"},"PeriodicalIF":0.0,"publicationDate":"2015-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4606489/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"34102163","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Development of a competency-based formative progress test with student-generated MCQs: Results from a multi-centre pilot study.
Pub Date: 2015-10-15. eCollection Date: 2015-01-01. DOI: 10.3205/zma000988
Stefan Wagener, Andreas Möltner, Sevgi Tımbıl, Maryna Gornostayeva, Jobst-Hendrik Schultz, Peter Brüstle, Daniela Mohr, Anna Vander Beken, Julian Better, Martin Fries, Marc Gottschalk, Janine Günther, Laura Herrmann, Christian Kreisel, Tobias Moczko, Claudius Illg, Adam Jassowicz, Andreas Müller, Moritz Niesert, Felix Strübing, Jana Jünger
Introduction: Progress tests provide students feedback on their level of proficiency over the course of their medical studies. Peer-assisted learning and competency-based education have become increasingly important in medical education. Although progress tests have been proven to be useful as a longitudinal feedback instrument, there are currently no progress tests that have been created in cooperation with students or that focus on competency in medical education. In this study, we investigated the extent to which students can be included in the development of a progress test and demonstrated that aspects of knowledge related to competency can be represented on a competency-based progress test.
Methods: To develop the competency-based progress test, three expert groups generated a two-dimensional blueprint for 144 multiple-choice questions (MCQs) covering groups of medical subjects and groups of competency areas. A total of 31 students from seven medical schools in Germany actively participated in this exercise. After completing an intensive and comprehensive training programme, the students generated and reviewed the test questions for the competency-based progress test using a separate platform of the ItemManagementSystem (IMS). The test was administered as a formative test to 469 students in a pilot study in November 2013 at eight medical schools in Germany. The scores were analysed for the overall test and differentiated according to the subject groups and competency areas.
Results: The students compiled a pool of more than 200 MCQs for the pilot, of which 118 student-generated MCQs were used in the progress test. University instructors supplemented this pool with 26 MCQs, which primarily addressed the area of scientific skills. The post-review showed that the student-generated MCQs were of high quality with regard to both test statistics and content. Overall, the progress test displayed a very high reliability. When the academic years were compared, the progress test reflected growth over the course of study not only at the level of the overall test but also within the subject groups and competency areas.
Outlook: Development in cooperation with students will be continued. The focus will be on compiling additional questions and test formats that can represent competency at a higher skill level, such as key-feature questions, situational judgement test questions and OSCEs. In addition, the feedback formats will be successively expanded. The intention is also to offer the formative competency-based progress test online.
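The Results above report a very high overall reliability for the 144-item test. As a rough illustration of how such a reliability estimate can be obtained from a dichotomously scored item matrix, the sketch below computes Cronbach's alpha on simulated data; the abstract does not state which coefficient was actually used, and all numbers here are invented.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an examinee x item matrix of 0/1 MCQ scores."""
    k = items.shape[1]                          # number of items
    item_var = items.var(axis=0, ddof=1).sum()  # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of total scores
    return k / (k - 1) * (1 - item_var / total_var)

# Invented data: 469 examinees x 144 items, responses driven by a latent ability
# so that the items are positively correlated (as in a real progress test).
rng = np.random.default_rng(0)
ability = rng.normal(size=(469, 1))
difficulty = rng.normal(size=(1, 144))
p_correct = 1 / (1 + np.exp(-(ability - difficulty)))
scores = (rng.random((469, 144)) < p_correct).astype(int)

print(f"Cronbach's alpha = {cronbach_alpha(scores):.2f}")
```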
{"title":"Development of a competency-based formative progress test with student-generated MCQs: Results from a multi-centre pilot study.","authors":"Stefan Wagener, Andreas Möltner, Sevgi Tımbıl, Maryna Gornostayeva, Jobst-Hendrik Schultz, Peter Brüstle, Daniela Mohr, Anna Vander Beken, Julian Better, Martin Fries, Marc Gottschalk, Janine Günther, Laura Herrmann, Christian Kreisel, Tobias Moczko, Claudius Illg, Adam Jassowicz, Andreas Müller, Moritz Niesert, Felix Strübing, Jana Jünger","doi":"10.3205/zma000988","DOIUrl":"10.3205/zma000988","url":null,"abstract":"<p><strong>Introduction: </strong>Progress tests provide students feedback on their level of proficiency over the course of their medical studies. Peer-assisted learning and competency-based education have become increasingly important in medical education. Although progress tests have been proven to be useful as a longitudinal feedback instrument, there are currently no progress tests that have been created in cooperation with students or that focus on competency in medical education. In this study, we investigated the extent to which students can be included in the development of a progress test and demonstrated that aspects of knowledge related to competency can be represented on a competency-based progress test.</p><p><strong>Methods: </strong>A two-dimensional blueprint for 144 multiple-choice questions (MCQs) covering groups of medical subjects and groups of competency areas was generated by three expert groups for developing the competency-based progress test. A total of 31 students from seven medical schools in Germany actively participated in this exercise. After completing an intensive and comprehensive training programme, the students generated and reviewed the test questions for the competency-based progress test using a separate platform of the ItemManagementSystem (IMS). This test was administered as a formative test to 469 students in a pilot study in November 2013 at eight medical schools in Germany. The scores were analysed for the overall test and differentiated according to the subject groups and competency areas.</p><p><strong>Results: </strong>A pool of more than 200 MCQs was compiled by the students for pilot use, of which 118 student-generated MCQs were used in the progress test. University instructors supplemented this pool with 26 MCQs, which primarily addressed the area of scientific skills. The post-review showed that student-generated MCQs were of high quality with regard to test statistic criteria and content. Overall, the progress test displayed a very high reliability. When the academic years were compared, the progress test mapped out over the course of study not only by the overall test but also in terms of the subject groups and competency areas.</p><p><strong>Outlook: </strong>Further development in cooperation with students will be continued. Focus will be on compiling additional questions and test formats that can represent competency at a higher skill level, such as key feature questions, situational judgement test questions and OSCE. In addition, the feedback formats will be successively expanded. 
The intention is also to offer the formative competency-based progress test online.</p>","PeriodicalId":30054,"journal":{"name":"GMS Zeitschrift fur Medizinische Ausbildung","volume":"32 4","pages":"Doc46"},"PeriodicalIF":0.0,"publicationDate":"2015-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4606478/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"34102168","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Influence of a revision course and the gender of examiners on the grades of the final ENT exam--a retrospective review of 3961 exams.
Pub Date: 2015-10-15. eCollection Date: 2015-01-01. DOI: 10.3205/zma000980
Matthäus C Grasl, Rudolf Seemann, Michael Hanisch, Gregor Heiduschka, Karl Kremser, Dietmar Thurnher
Background: Revision courses are intended to consolidate previously acquired knowledge and skills and, above all, to provide a basis for passing the subsequent exam.
Aim: The aim of this study was to investigate the influence of attending a revision course on the grades achieved in the final exam in Ear, Nose and Throat (ENT) Diseases. In addition, we asked whether the gender of the examiners plays a role in the marks awarded.
Methods: A total of 3961 exams at the Department of ENT Diseases in Vienna were analysed, 725 taken after a revision course (experimental group) and 3236 without a preceding revision course (comparison group). The revision courses were standardized in form and content, interactive and case-based. Both groups were examined uniformly with regard to topics and duration. Sixteen male and six female examiners were involved. Grading followed a five-level scale. Arithmetic means and medians of the examination marks were calculated for the entire sample; gender dependence was tested with the Wilcoxon-Mann-Whitney test. Inferential statistics included single- and multi-factorial analyses of variance as well as uni- and multivariate regression models.
Results: The experimental group achieved a grade average of 2.54, compared with 2.46 for the comparison group. Broken down by examiner gender, the averages were 2.54 (male examiners) and 2.58 (female examiners) for the experimental group, and 2.44 and 2.61, respectively, for the comparison group. Female examiners graded significantly more strictly than their male colleagues (P=0.001926).
Conclusions: The ENT revision course did not improve the grade average of the final ENT exam. Female examiners grade more strictly than male examiners. There was no difference in the frequency of grades 4 (pass) and 5 (fail), but female examiners awarded the grade 1 less often.
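The Methods above test the gender dependence of the five-level grades with the Wilcoxon-Mann-Whitney test. A minimal sketch of that comparison follows; the grade vectors are invented placeholders, not the Vienna data.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)

# Hypothetical grades (1 = best ... 5 = fail) awarded by male and female examiners.
grades_male = rng.choice([1, 2, 3, 4, 5], size=300, p=[0.30, 0.30, 0.25, 0.10, 0.05])
grades_female = rng.choice([1, 2, 3, 4, 5], size=120, p=[0.20, 0.30, 0.30, 0.15, 0.05])

# Two-sided Wilcoxon-Mann-Whitney test for a location shift between the two groups.
u_stat, p_value = mannwhitneyu(grades_male, grades_female, alternative="two-sided")
print(f"U = {u_stat:.0f}, p = {p_value:.4f}")
print(f"mean grade, male examiners:   {grades_male.mean():.2f}")
print(f"mean grade, female examiners: {grades_female.mean():.2f}")
```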
{"title":"Influence of a revision course and the gender of examiners on the grades of the final ENT exam--a retrospective review of 3961 exams.","authors":"Matthäus C Grasl, Rudolf Seemann, Michael Hanisch, Gregor Heiduschka, Karl Kremser, Dietmar Thurnher","doi":"10.3205/zma000980","DOIUrl":"https://doi.org/10.3205/zma000980","url":null,"abstract":"<p><strong>Unlabelled: </strong>Revision courses should repeat already acquired knowledge and skills and mostly provide a basis for passing the following exam.</p><p><strong>Aim: </strong>The aim of the study is to investigate the influence of a previously attended revision course on the grades achieved in a final exam (Ear, Nose and Throat Diseases). Additionally we ask the question whether the gender of the examiners plays a role concerning the marks or not.</p><p><strong>Methods: </strong>3961 exams at the Department of Ear, Nose and Throat (ENT) Diseases in Vienna were investigated, 725 with revision course (experimental group) and 3236 without previous revision course (comparison group). The revision courses were performed in a standardized way concerning form and content, interactive and case based. Both groups were examined uniform in regard to topics and time duration. 16 male and 6 female examiners were involved. The grading followed a five-level scale. The examination marks were calculated in the arithmetic mean and median value for the entire sample, gender dependence was calculated according to the Wilcoxon-Mann-Whitney-Test. The inferential statistics included single- and multiple factorial analyses of variance as well as uni- and multivariate regression models.</p><p><strong>Results: </strong>The experimental group achieved a grade average of 2.54 compared with 2.46 for the comparison group. Splitting up into male and female examiners, an average of 2.54 and 2.58 resp. for the experimental group and 2.44 and 2.61 resp. for the comparison group resulted. Female examiner marked significantly lower grades in comparison to their male colleagues (P= 0.001926).</p><p><strong>Conclusions: </strong>The ENT revision course did not improve the grade averages of the final ENT exam. Female examiners grade stricter than male examiners. There was no difference concerning grades 4 (pass) and 5 (fail) but female examiners grade less with mark 1.</p>","PeriodicalId":30054,"journal":{"name":"GMS Zeitschrift fur Medizinische Ausbildung","volume":"32 4","pages":"Doc38"},"PeriodicalIF":0.0,"publicationDate":"2015-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4606481/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"34100622","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The new final Clinical Skills examination in human medicine in Switzerland: Essential steps of exam development, implementation and evaluation, and central insights from the perspective of the national Working Group.
Pub Date: 2015-10-15. eCollection Date: 2015-01-01. DOI: 10.3205/zma000982
Christoph Berendonk, Christian Schirlo, Gianmarco Balestra, Raphael Bonvin, Sabine Feller, Philippe Huber, Ernst Jünger, Matteo Monti, Kai Schnabel, Christine Beyeler, Sissel Guttormsen, Sören Huwendiek
Objective: Since 2011, the new national final examination in human medicine has been implemented in Switzerland, with a structured clinical-practical part in the OSCE format. From the perspective of the national Working Group, the current article describes the essential steps in the development, implementation and evaluation of the Federal Licensing Examination Clinical Skills (FLE CS) as well as the applied quality assurance measures. Finally, central insights gained from the last years are presented.
Methods: Based on the principles of action research, the FLE CS is in a constant state of further development. Building on systematically documented experience from previous years, the Working Group discusses unresolved questions and develops solution approaches (planning), which are then implemented in the examination (implementation) and subsequently evaluated (reflection). The results presented here are the product of this iterative procedure.
Results: The FLE CS is created by experts from all faculties and subject areas in a multistage process. The examination is administered in German and French on a decentralised basis and consists of twelve interdisciplinary stations per candidate. As important quality assurance measures, the national Review Board (content validation) and the meetings of the standardised patient trainers (standardisation) have proven worthwhile. The statistical analyses show good measurement reliability and support the construct validity of the examination. Among the central insights of the past years, it has been established that the consistent implementation of the principles of action research contributes to the successful further development of the examination.
Conclusion: The centrally coordinated, collaborative-iterative process, incorporating experts from all faculties, makes a fundamental contribution to the quality of the FLE CS. The processes and insights presented here can be useful for others planning a similar undertaking.
{"title":"The new final Clinical Skills examination in human medicine in Switzerland: Essential steps of exam development, implementation and evaluation, and central insights from the perspective of the national Working Group.","authors":"Christoph Berendonk, Christian Schirlo, Gianmarco Balestra, Raphael Bonvin, Sabine Feller, Philippe Huber, Ernst Jünger, Matteo Monti, Kai Schnabel, Christine Beyeler, Sissel Guttormsen, Sören Huwendiek","doi":"10.3205/zma000982","DOIUrl":"https://doi.org/10.3205/zma000982","url":null,"abstract":"<p><strong>Objective: </strong>Since 2011, the new national final examination in human medicine has been implemented in Switzerland, with a structured clinical-practical part in the OSCE format. From the perspective of the national Working Group, the current article describes the essential steps in the development, implementation and evaluation of the Federal Licensing Examination Clinical Skills (FLE CS) as well as the applied quality assurance measures. Finally, central insights gained from the last years are presented.</p><p><strong>Methods: </strong>Based on the principles of action research, the FLE CS is in a constant state of further development. On the foundation of systematically documented experiences from previous years, in the Working Group, unresolved questions are discussed and resulting solution approaches are substantiated (planning), implemented in the examination (implementation) and subsequently evaluated (reflection). The presented results are the product of this iterative procedure.</p><p><strong>Results: </strong>The FLE CS is created by experts from all faculties and subject areas in a multistage process. The examination is administered in German and French on a decentralised basis and consists of twelve interdisciplinary stations per candidate. As important quality assurance measures, the national Review Board (content validation) and the meetings of the standardised patient trainers (standardisation) have proven worthwhile. The statistical analyses show good measurement reliability and support the construct validity of the examination. Among the central insights of the past years, it has been established that the consistent implementation of the principles of action research contributes to the successful further development of the examination.</p><p><strong>Conclusion: </strong>The centrally coordinated, collaborative-iterative process, incorporating experts from all faculties, makes a fundamental contribution to the quality of the FLE CS. The processes and insights presented here can be useful for others planning a similar undertaking.</p>","PeriodicalId":30054,"journal":{"name":"GMS Zeitschrift fur Medizinische Ausbildung","volume":"32 4","pages":"Doc40"},"PeriodicalIF":0.0,"publicationDate":"2015-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4606485/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"34100624","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The reliability of the pass/fail decision for assessments comprised of multiple components.
Pub Date: 2015-10-15. eCollection Date: 2015-01-01. DOI: 10.3205/zma000984
Andreas Möltner, Sevgi Tımbıl, Jana Jünger
Objective: The decision with the most serious consequences for a student taking an assessment is the decision to pass or fail that student. For this reason, the reliability of the pass/fail decision must be determined for high-quality assessments, just like the measurement reliability of the scores. Assessments in a particular subject (graded course credit) are often composed of multiple components that must be passed independently of each other. When separate pass/fail decisions are combined "conjunctively", as with other complex decision rules for passing, adequate methods of analysis are needed to estimate the accuracy and consistency of these classifications. To date, very few papers have addressed this issue; a generally applicable procedure was published by Douglas and Mislevy in 2010. Using the example of an assessment comprised of several parts that must be passed separately, this study analyzes the reliability underlying the decision to pass or fail students and discusses the impact of an improved method for identifying those who do not fulfill the minimum requirements.
Method: The accuracy and consistency of the decision to pass or fail an examinee in the subject cluster Internal Medicine/General Medicine/Clinical Chemistry at the University of Heidelberg's Faculty of Medicine was investigated. This cluster requires students to separately pass three components (two written exams and an OSCE), whereby students may reattempt to pass each component twice. Our analysis was carried out using the method described by Douglas and Mislevy.
Results: When the individual pass/fail decisions are linked by complex logical rules and failure rates are low, the reliability of the overall decision to grant graded course credit is frequently very low, even if the reliabilities of the individual components are high. For the example analyzed here, the classification accuracy and consistency of the conjunctive combination of the three parts are relatively low, with κ=0.49 and κ=0.47 respectively, despite reliabilities above 0.75 for each of the three components. The option to repeat each component twice means that only about half of the candidates who do not satisfy the minimum requirements would fail the overall assessment, while the other half can continue their studies despite deficient knowledge and skills.
Conclusion: The method put forth by Douglas and Mislevy allows the decision accuracy and consistency to be analyzed for complex combinations of scores from different components. Even with highly reliable components, the pass/fail decision is not necessarily reliable, for instance when failure rates are low. Assessments must be administered with the explicit goal of identifying examinees who do not fulfill the minimum requirements.
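The following sketch is not the analytic Douglas-Mislevy procedure used in the paper, but a small Monte Carlo illustration of the effect it quantifies: with three conjunctively combined components, two permitted retakes each and a low failure rate, decision accuracy and consistency (Cohen's κ) stay modest even though each component is reliable. All distributional assumptions (normal ability, reliability 0.75, a cut score placed so that about 5% of examinees truly fail) are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

N = 100_000        # simulated examinees
REL = 0.75         # assumed score reliability of each component
CUT = -1.645       # cut score; ~5% of examinees are "true" non-masters (assumption);
                   # for simplicity the same cut is applied to ability and observed scores
ATTEMPTS = 3       # first attempt plus two permitted retakes per component
COMPONENTS = 3     # two written exams and one OSCE

theta = rng.normal(size=N)       # latent ability (standardised)
true_master = theta >= CUT       # examinees who truly meet the minimum requirements

def run_exam(theta: np.ndarray) -> np.ndarray:
    """One complete run of the conjunctive exam: pass every component, best of 3 attempts."""
    passed_all = np.ones(theta.size, dtype=bool)
    for _ in range(COMPONENTS):
        noise = rng.normal(size=(theta.size, ATTEMPTS))
        observed = np.sqrt(REL) * theta[:, None] + np.sqrt(1 - REL) * noise
        passed_all &= (observed >= CUT).any(axis=1)   # one passing attempt suffices
    return passed_all

decision_a = run_exam(theta)
decision_b = run_exam(theta)     # independent replication, for decision consistency

def kappa(x: np.ndarray, y: np.ndarray) -> float:
    """Cohen's kappa between two binary classifications."""
    p_obs = (x == y).mean()
    p_exp = x.mean() * y.mean() + (1 - x.mean()) * (1 - y.mean())
    return (p_obs - p_exp) / (1 - p_exp)

print(f"accuracy kappa (decision vs. true status): {kappa(decision_a, true_master):.2f}")
print(f"consistency kappa (two replications):      {kappa(decision_a, decision_b):.2f}")
print(f"share of true non-masters who still pass:  {decision_a[~true_master].mean():.2f}")
```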
{"title":"The reliability of the pass/fail decision for assessments comprised of multiple components.","authors":"Andreas Möltner, Sevgi Tımbıl, Jana Jünger","doi":"10.3205/zma000984","DOIUrl":"10.3205/zma000984","url":null,"abstract":"<p><strong>Objective: </strong>The decision having the most serious consequences for a student taking an assessment is the one to pass or fail that student. For this reason, the reliability of the pass/fail decision must be determined for high quality assessments, just as the measurement reliability of the point values. Assessments in a particular subject (graded course credit) are often composed of multiple components that must be passed independently of each other. When \"conjunctively\" combining separate pass/fail decisions, as with other complex decision rules for passing, adequate methods of analysis are necessary for estimating the accuracy and consistency of these classifications. To date, very few papers have addressed this issue; a generally applicable procedure was published by Douglas and Mislevy in 2010. Using the example of an assessment comprised of several parts that must be passed separately, this study analyzes the reliability underlying the decision to pass or fail students and discusses the impact of an improved method for identifying those who do not fulfill the minimum requirements.</p><p><strong>Method: </strong>The accuracy and consistency of the decision to pass or fail an examinee in the subject cluster Internal Medicine/General Medicine/Clinical Chemistry at the University of Heidelberg's Faculty of Medicine was investigated. This cluster requires students to separately pass three components (two written exams and an OSCE), whereby students may reattempt to pass each component twice. Our analysis was carried out using the method described by Douglas and Mislevy.</p><p><strong>Results: </strong>Frequently, when complex logical connections exist between the individual pass/fail decisions in the case of low failure rates, only a very low reliability for the overall decision to grant graded course credit can be achieved, even if high reliabilities exist for the various components. For the example analyzed here, the classification accuracy and consistency when conjunctively combining the three individual parts is relatively low with κ=0.49 or κ=0.47, despite the good reliability of over 0.75 for each of the three components. The option to repeat each component twice leads to a situation in which only about half of the candidates who do not satisfy the minimum requirements would fail the overall assessment, while the other half is able to continue their studies despite having deficient knowledge and skills.</p><p><strong>Conclusion: </strong>The method put forth by Douglas and Mislevy allows the analysis of the decision accuracy and consistency for complex combinations of scores from different components. Even in the case of highly reliable components, it is not necessarily so that a reliable pass/fail decision has been reached - for instance in the case of low failure rates. 
Assessments must be administered with the explicit goal of identifying examinees that do not fulfill the minimum requirements.</p>","PeriodicalId":30054,"journal":{"name":"GMS Zeitschrift fur Medizinische Ausbildung","volume":"32 4","pages":"Doc42"},"PeriodicalIF":0.0,"publicationDate":"2015-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4606479/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"34102164","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
8th meeting of the medical assessment consortium UCAN: "Collaborative Perspectives for Competency-based and Quality-assured Medical Assessment".
Pub Date: 2015-10-15. eCollection Date: 2015-01-01. DOI: 10.3205/zma000979
Ajit Johannes Thamburaj, Konstantin Brass, Manfred Herrmann, Jana Jünger
On February 9 and February 1
{"title":"8th meeting of the medical assessment consortium UCAN: \"Collaborative Perspectives for Competency-based and Quality-assured Medical Assessment\".","authors":"Ajit Johannes Thamburaj, Konstantin Brass, Manfred Herrmann, Jana Jünger","doi":"10.3205/zma000979","DOIUrl":"https://doi.org/10.3205/zma000979","url":null,"abstract":"On February 9 and February 1","PeriodicalId":30054,"journal":{"name":"GMS Zeitschrift fur Medizinische Ausbildung","volume":"32 4","pages":"Doc37"},"PeriodicalIF":0.0,"publicationDate":"2015-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4606488/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"34100621","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Impact of the Medical Faculty on Study Success in Freiburg: Results from Graduate Surveys.
Pub Date: 2015-10-15. eCollection Date: 2015-01-01. DOI: 10.3205/zma000986
Silke Biller, Martin Boeker, Götz Fabry, Marianne Giesler
Aim: Using the data from graduate surveys, this study aims to analyze which factors related to teaching and learning at the Freiburg Faculty of Medicine can influence study success.
Background: Study success and the factors influencing it have long been the subject of investigation, with study success being measured in terms of easily quantifiable indicators (final grades, student satisfaction, etc.). In recent years, it has also frequently been assessed in terms of graduate competency levels. Graduate surveys are considered suitable instruments for measuring these dimensions of study success.
Method: Data from three Freiburg graduate surveys conducted one and a half years after graduation were drawn upon for the analysis. Study success was operationalized using four indicators: the result on the written section of the M2 exam, self-assessed medical expertise, self-assessed scientific expertise, and student satisfaction. Multiple regression analyses were used to calculate the predictive power of selected variables, likewise measured by the graduate surveys, for the different study success indicators.
Results: It was possible to identify models that contribute slightly to moderately to the prediction of study success. The university entrance qualification grade proved to be the strongest predictor of the written M2 exam result: R² is between 0.08 and 0.22 for the three surveys. Different variables specific to degree program structure and teaching are helpful for predicting medical expertise (R²=0.04-0.32) and student satisfaction (R²=0.12-0.35). Two variables, the structure and curricular sequencing of the degree program and the combination of theory and practice, proved to be significant, sample-invariant predictors (β(structure)=0.21-0.58, β(combination)=0.27-0.56). For scientific expertise, no sample-independent predictors could be determined.
Conclusion: Factors describing teaching contribute little to predicting the written M2 exam score, which is plausible insofar as teaching goes far beyond the heavily knowledge-based content of that exam. The lack of predictability for scientific expertise is most likely explained by the fact that scientific skills have only rarely, and often only implicitly, been included in the curriculum. The variable 'combination of theory and practice' appears to be important for imparting medical expertise and for the development of student satisfaction. The extent to which these relationships are practically relevant needs to be explored in further studies. A specific limitation is that the measurement of expertise and skills is based solely on self-assessments.
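A minimal sketch of the kind of multiple regression reported above (predicting an M2-type score from the entrance qualification and ratings of study conditions), using standardised predictors so that β-weights and R² can be read off directly. Variable names and data are invented placeholders, not the Freiburg survey data; statsmodels is used here purely for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 250

# Hypothetical graduate-survey variables.
df = pd.DataFrame({
    "entrance_qualification": rng.normal(2.0, 0.5, n),  # school-leaving grade, 1 = best
    "structure_rating":       rng.integers(1, 6, n),    # rated structure/sequencing of the program
    "theory_practice":        rng.integers(1, 6, n),    # rated combination of theory and practice
})
# Simulated outcome: written M2 score, loosely driven by the entrance qualification.
df["m2_score"] = 75 - 8 * df["entrance_qualification"] + rng.normal(0, 5, n)

# z-standardise everything so the regression coefficients are beta weights.
z = (df - df.mean()) / df.std()
predictors = ["entrance_qualification", "structure_rating", "theory_practice"]
model = sm.OLS(z["m2_score"], sm.add_constant(z[predictors])).fit()

print(f"R^2 = {model.rsquared:.2f}")
print(model.params.round(2))   # standardised beta weights (plus an intercept of ~0)
```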
{"title":"Impact of the Medical Faculty on Study Success in Freiburg: Results from Graduate Surveys.","authors":"Silke Biller, Martin Boeker, Götz Fabry, Marianne Giesler","doi":"10.3205/zma000986","DOIUrl":"10.3205/zma000986","url":null,"abstract":"<p><strong>Aim: </strong>Using the data from graduate surveys, this study aims to analyze which factors related to teaching and learning at the Freiburg Faculty of Medicine can influence study success.</p><p><strong>Background: </strong>Study success and the factors influencing it have long been the subject of investigation, with study success being measured in terms of easily quantifiable indicators (final grades, student satisfaction, etc.). In recent years, it has also frequently been assessed in terms of graduate competency levels. Graduate surveys are considered suitable instruments for measuring these dimensions of study success.</p><p><strong>Method: </strong>Data from three Freiburg graduate surveys conducted one and a half years after graduation were drawn upon for the analysis. Study success was operationalized using four indicators: results on the written section of the M2 exam, self-assessment of medical expertise and scientific expertise, and student satisfaction. Using multiple regression analyses, the predictive power was calculated for selected variables, also measured by the graduate surveys, for the different study success indicators.</p><p><strong>Results: </strong>It was possible to identify models that contribute slightly or moderately to the prediction of study success. The score earned on the university entrance qualification demonstrated itself to be the strongest predictor for forecasting the M2 written exam: R(2) is between 0.08 and 0.22 for the three surveys. Different variables specific to degree program structure and teaching are helpful for predicting medical expertise (R(2)=0.04-0.32) and student satisfaction (R(2)=0.12-0.35). The two variables, structure and curricular sequencing of the degree program and combination of theory and practice, show themselves to be significant, sample-invariant predictors (β-weight(Structure)=0.21-0.58, β-weight(Combination)=0.27-0.56). For scientific expertise, no sample-independent predictors could be determined.</p><p><strong>Conclusion: </strong>Factors describing teaching hardly provide any assistance when predicting the written M2 exam score, which makes sense to the extent that teaching goes far beyond the heavily knowledge-based content of the written M2 exam. The lack of predictability for scientific expertise is most likely explained in that these have been only rarely included in the curriculum and often inexplicitly so. The variable combination of theory and practice appears to be significant for imparting medical expertise and the development of student satisfaction. The extent to which these relationships are practically relevant needs to be explored in further studies. 
A specific limitation is that the measurement of expertise and skill is based solely on self-assessments.</p>","PeriodicalId":30054,"journal":{"name":"GMS Zeitschrift fur Medizinische Ausbildung","volume":"32 4","pages":"Doc44"},"PeriodicalIF":0.0,"publicationDate":"2015-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4606483/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"34102166","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Final Oral/Practical State Examination at Freiburg Medical Faculty in 2012--Analysis of grading to test quality assurance.
Pub Date: 2015-10-15. eCollection Date: 2015-01-01. DOI: 10.3205/zma000981
Angela Schickler, Peter Brüstle, Silke Biller
Aim: The aim of this study is to analyze the grades given for the oral/practical part of the German State Examination at the Medical Faculty of Freiburg. We examined whether or not the grades given for the written and the oral/practical examinations correlated and if differences in grading between the Freiburg University Medical Center (UMC) and the other teaching hospitals could be found. In order to improve the quality of the state examination, the medical school has been offering standardized training for examiners for several years. We evaluated whether or not trained and untrained examiners differed in their grading of the exam and how these differences have changed over time.
Methods: The results of the 2012 spring and fall exams were analyzed (N=315). The relevant data set was made available to us by the Baden-Württemberg Examination Office (Landesprüfungsamt). The data were analyzed by means of descriptive and inferential statistics.
Results: We observed a correlation of ρ=0.460** between the grades for the written and the oral/practical exams. The UMC and the teaching hospitals did not differ significantly in their grade distributions. Compared with untrained examiners, trained examiners assigned the grade "very good" less often. Furthermore, they displayed a significantly higher variance in the grades given (p=0.007, phi=0.165). This effect is even stronger for examiners who had taken part in the training less than a year before.
Conclusion: The results of this study suggest that the standardized training for examiners at the Medical Faculty of Freiburg is effective for quality assurance. As a consequence, more examiners should be motivated to take part in the training.
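A minimal sketch of the correlation reported in the Results above (Spearman's ρ between written and oral/practical grades); the paired grades below are invented and only illustrate the computation.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(4)
n = 315

# Hypothetical paired grades (1 = best ... 4 = lowest pass), not the Freiburg data.
written = rng.choice([1, 2, 3, 4], size=n, p=[0.2, 0.4, 0.3, 0.1])
oral = np.clip(written + rng.choice([-1, 0, 1], size=n, p=[0.25, 0.5, 0.25]), 1, 4)

rho, p_value = spearmanr(written, oral)
print(f"Spearman rho = {rho:.3f}, p = {p_value:.3g}")
```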
{"title":"The Final Oral/Practical State Examination at Freiburg Medical Faculty in 2012--Analysis of grading to test quality assurance.","authors":"Angela Schickler, Peter Brüstle, Silke Biller","doi":"10.3205/zma000981","DOIUrl":"10.3205/zma000981","url":null,"abstract":"<p><strong>Aim: </strong>The aim of this study is to analyze the grades given for the oral/practical part of the German State Examination at the Medical Faculty of Freiburg. We examined whether or not the grades given for the written and the oral/practical examinations correlated and if differences in grading between the Freiburg University Medical Center (UMC) and the other teaching hospitals could be found. In order to improve the quality of the state examination, the medical school has been offering standardized training for examiners for several years. We evaluated whether or not trained and untrained examiners differed in their grading of the exam and how these differences have changed over time.</p><p><strong>Methods: </strong>The results of the 2012 spring and fall exams were analyzed (N=315). The relevant data set was made available to us by the Baden-Württemberg Examination Office (Landesprüfungsamt). The data were analyzed by means of descriptive and inferential statistics.</p><p><strong>Results: </strong>We observed a correlation of ρ=0.460** between the grades for the written and the oral/practical exams. The UMC and the teaching hospitals did not differ significantly in their grade distributions. Compared to untrained examiners, trained ones assigned the grade of \"very good\" less often. Furthermore, they displayed a significantly higher variance in the grades given (p=0.007, phi=0.165). This effect is stronger when concentrating specifically on those examiners who took part in the training less than a year before.</p><p><strong>Conclusion: </strong>The results of this study suggest that the standardized training for examiners at the Medical Faculty of Freiburg is effective for quality assurance. As a consequence, more examiners should be motivated to take part in the training.</p>","PeriodicalId":30054,"journal":{"name":"GMS Zeitschrift fur Medizinische Ausbildung","volume":"32 4","pages":"Doc39"},"PeriodicalIF":0.0,"publicationDate":"2015-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4606482/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"34100623","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Overcome the 60% passing score and improve the quality of assessment.
Pub Date: 2015-10-15. eCollection Date: 2015-01-01. DOI: 10.3205/zma000985
Ara Tekian, John Norcini
It is not unusual for institutions around the world to have fixed standards (e.g., 60%) for all of their examinations. This creates problems in the creation of examinations, since all of the content has to be chosen with an eye toward this fixed standard. As a result, the validity of the decisions based on these examinations can be adversely influenced, making them less useful for their intended purposes. Over the past several decades, many institutions have addressed this problem by using standard-setting methods which are defensible, acceptable, and credible [1], [2]. Many methods are available, and the major reasons to use them are to ensure that test content is appropriately selected and to be as fair as possible to students and other test users [2], [3]. One barrier to the wider use of these methods is that some institutions object to the fact that the fixed standard (e.g., 60%) has not been applied. However, it is possible to rescale the passing score so that it is equal to the fixed standard, and then apply that same rescaling calculation to all of the test scores. This ensures that the institutional guidelines are not violated and allows the application of accepted standard-setting methods. In turn, the application of these methods allows the content of the test to be selected without regard to a fixed standard, increases the validity of the decisions being made, and ensures a fairer and more accurate test of students.
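The rescaling described above can be implemented, for example, as a piecewise-linear transformation that maps the judgmentally set cut score onto the fixed 60% standard while leaving 0% and 100% in place. The function below is one common way to do this, not necessarily the exact calculation the authors have in mind; the 72% cut score in the example is invented.

```python
def rescale(score: float, cut: float, fixed_standard: float = 60.0, maximum: float = 100.0) -> float:
    """Map a raw percentage score so that `cut` lands exactly on `fixed_standard`.

    Scores below the cut are mapped linearly onto [0, fixed_standard],
    scores at or above the cut onto [fixed_standard, maximum].
    """
    if score < cut:
        return fixed_standard * score / cut
    return fixed_standard + (maximum - fixed_standard) * (score - cut) / (maximum - cut)

# Example: a standard-setting panel sets the cut score at 72%; results are still
# reported against the institutional 60% standard after rescaling.
for raw in (50, 72, 80, 100):
    print(f"raw {raw:>3}% -> reported {rescale(raw, cut=72.0):5.1f}%")
```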
{"title":"Overcome the 60% passing score and improve the quality of assessment.","authors":"Ara Tekian, John Norcini","doi":"10.3205/zma000985","DOIUrl":"https://doi.org/10.3205/zma000985","url":null,"abstract":"<p><p>It is not unusual for institutions around the world to have fixed standards (e.g., 60%) for all of their examinations. This creates problems in the creation of examinations, since all of the content has to be chosen with an eye toward this fixed standard. As a result, the validity of the decisions based on these examinations can be adversely influenced, making them less useful for their intended purposes. Over the past several decades, many institutions have addressed this problem by using standard setting methods which are defensible, acceptable, and credible [1], [2]. Many methods are available and the major reasons to use them is to ensure that test content is appropriately selected and to be as fair to the students and other test users as possible [2], [3]. One barrier to the wider use of these methods is that some institutions object to the fact that the fixed standard (e.g., 60%) has not been applied. However, it is possible to rescale the passing score so that it is equal to the fixed standard, and then apply that same rescaling calculation to all of the test scores. This ensures that the institutional guidelines are not violated and allows the application of accepted methods of standard-setting. In turn, the application of these methods allow the content of the test to be selected without regard to a fixed standard, increases the validity of the decisions being made, and ensures a fairer and more accurate test of students. </p>","PeriodicalId":30054,"journal":{"name":"GMS Zeitschrift fur Medizinische Ausbildung","volume":"32 4","pages":"Doc43"},"PeriodicalIF":0.0,"publicationDate":"2015-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4606480/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"34102165","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
DOPS (Direct Observation of Procedural Skills) in undergraduate skills-lab: Does it work? Analysis of skills-performance and curricular side effects.
Pub Date: 2015-10-15. eCollection Date: 2015-01-01. DOI: 10.3205/zma000987
Christoph Profanter, Alexander Perathoner
Objective: Adequately teaching and assessing clinical skills in the undergraduate setting is becoming more and more important. In a surgical skills-lab course at the Medical University of Innsbruck, fourth-year students were taught using DOPS (direct observation of procedural skills). We analyzed whether DOPS worked in this setting, which performance levels could be reached compared with tutor teaching (one tutor, five students), and which curricular side effects could be observed.
Methods: In a prospective randomized trial in summer 2013 (April-June), four competence-level-based skills were taught in small groups during one week: surgical abdominal examination, urethral catheterization (phantom), digital rectal examination (phantom), and handling of central venous catheters. Group A was taught with DOPS, group B with a classical tutor system. Both groups underwent an OSCE (objective structured clinical examination) for assessment. 193 students were included in the study. Altogether 756 OSCEs were carried out, 209 (27.6%) in the DOPS group and 547 (72.3%) in the tutor group.
Results: Both groups reached high performance levels. In the first month there was a statistically significant difference (p<0.05), with 95% positive OSCE items in the DOPS group versus 88% in the tutor group. In the following months the performance rates no longer differed and settled at 90% in both groups. For the practical skills, the analysis revealed a high correspondence between positive DOPS (92.4%) and OSCE (90.8%) results.
Discussion: As shown by our data, DOPS yields high clinical-skills performance and works well in the undergraduate setting. Given the high correspondence between DOPS and OSCE results, DOPS should be considered the preferred assessment tool in a student skills-lab. The convergence of performance rates in the following months, after the initial superiority of DOPS, could be explained by an interaction between DOPS and the tutor system: DOPS elements seem to have improved tutoring, and with it performance rates, as well. DOPS in the student skills-lab provides structured feedback and assessment without additional personnel or financial resources compared with classic small-group training.
Conclusion: In summary, this study shows that DOPS is an efficient method for teaching clinical skills. Its effects on the didactic culture reach beyond the positive influence on performance rates.
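The Results above compare the proportion of positively rated OSCE items between the DOPS and the tutor group (95% vs. 88% in the first month). The abstract does not state which test produced the p<0.05; the sketch below shows one common choice, a chi-square test on a 2x2 table, using invented counts that are only loosely consistent with the reported percentages.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical counts of positively vs. negatively rated OSCE items per group
# (chosen to roughly match the reported 95% vs. 88% positive items; the actual
# item counts are not given in the abstract).
table = np.array([
    [199, 10],   # DOPS group:  positive, negative
    [481, 66],   # tutor group: positive, negative
])

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")
```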
{"title":"DOPS (Direct Observation of Procedural Skills) in undergraduate skills-lab: Does it work? Analysis of skills-performance and curricular side effects.","authors":"Christoph Profanter, Alexander Perathoner","doi":"10.3205/zma000987","DOIUrl":"https://doi.org/10.3205/zma000987","url":null,"abstract":"<p><strong>Objective: </strong>Sufficient teaching and assessing clinical skills in the undergraduate setting becomes more and more important. In a surgical skills-lab course at the Medical University of Innsbruck fourth year students were teached with DOPS (direct observation of procedural skills). We analyzed whether DOPS worked or not in this setting, which performance levels could be reached compared to tutor teaching (one tutor, 5 students) and which curricular side effects could be observed.</p><p><strong>Methods: </strong>In a prospective randomized trial in summer 2013 (April - June) four competence-level-based skills were teached in small groups during one week: surgical abdominal examination, urethral catheterization (phantom), rectal-digital examination (phantom), handling of central venous catheters. Group A was teached with DOPS, group B with a classical tutor system. Both groups underwent an OSCE (objective structured clinical examination) for assessment. 193 students were included in the study. Altogether 756 OSCE´s were carried out, 209 (27,6%) in the DOPS- and 547 (72,3%) in the tutor-group.</p><p><strong>Results: </strong>Both groups reached high performance levels. In the first month there was a statistically significant difference (p<0,05) in performance of 95% positive OSCE items in the DOPS-group versus 88% in the tutor group. In the following months the performance rates showed no difference anymore and came to 90% in both groups. In practical skills the analysis revealed a high correspondence between positive DOPS (92,4%) and OSCE (90,8%) results.</p><p><strong>Discussion: </strong>As shown by our data DOPS furnish high performance of clinical skills and work well in the undergraduate setting. Due to the high correspondence of DOPS and OSCE results DOPS should be considered as preferred assessment tool in a students skills-lab. The approximation of performance-rates within the months after initial superiority of DOPS could be explained by an interaction between DOPS and tutor system: DOPS elements seem to have improved tutoring and performance rates as well. DOPS in students 'skills-lab afford structured feedback and assessment without increased personnel and financial resources compared to classic small group training.</p><p><strong>Conclusion: </strong>In summary, this study shows that DOPS represent an efficient method in teaching clinical skills. Their effects on didactic culture reach beyond the positive influence of performance rates.</p>","PeriodicalId":30054,"journal":{"name":"GMS Zeitschrift fur Medizinische Ausbildung","volume":"32 4","pages":"Doc45"},"PeriodicalIF":0.0,"publicationDate":"2015-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.3205/zma000987","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"34102167","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}