Development and validation of the EDUcational Course Assessment TOOLkit (EDUCATOOL) – a 12-item questionnaire for evaluation of training and learning programmes
Tena Matolić, D. Jurakić, Zrinka Greblo Jurakić, Tošo Maršić, Ž. Pedišić
{"title":"Development and validation of the EDUcational Course Assessment TOOLkit (EDUCATOOL) – a 12-item questionnaire for evaluation of training and learning programmes","authors":"Tena Matolić, D. Jurakić, Zrinka Greblo Jurakić, Tošo Maršić, Ž. Pedišić","doi":"10.3389/feduc.2023.1314584","DOIUrl":null,"url":null,"abstract":"The instruments for evaluation of educational courses are often highly complex and specifically designed for a given type of training. Therefore, the aims of this study were to develop a simple and generic EDUcational Course Assessment TOOLkit (EDUCATOOL) and determine its measurement properties.The development of EDUCATOOL encompassed: (1) a literature review; (2) drafting the questionnaire through open discussions between three researchers; (3) Delphi survey with five content experts; and (4) consultations with 20 end-users. A subsequent validity and reliability study involved 152 university students who participated in a short educational course. Immediately after the course and a week later, the participants completed the EDUCATOOL post-course questionnaire. Six weeks after the course and a week later, they completed the EDUCATOOL follow-up questionnaire. To establish the convergent validity of EDUCATOOL, the participants also completed the “Questionnaire for Professional Training Evaluation.”The EDUCATOOL questionnaires include 12 items grouped into the following evaluation components: (1) reaction; (2) learning; (3) behavioural intent (post-course)/behaviour (follow-up); and (4) expected outcomes (post-course)/results (follow-up). In confirmatory factor analyses, comparative fit index (CFI = 0.99 and 1.00), root mean square error of approximation (RMSEA = 0.05 and 0.03), and standardised root mean square residual (SRMR = 0.07 and 0.03) indicated adequate goodness of fit for the proposed factor structure of the EDUCATOOL questionnaires. The intraclass correlation coefficients (ICCs) for convergent validity of the post-course and follow-up questionnaires were 0.71 (95% confidence interval [CI]: 0.61, 0.78) and 0.86 (95% CI: 0.78, 0.91), respectively. The internal consistency reliability of the evaluation components expressed using Cronbach’s alpha ranged from 0.83 (95% CI: 0.78, 0.87) to 0.88 (95% CI: 0.84, 0.92) for the post-course questionnaire and from 0.95 (95% CI: 0.93, 0.96) to 0.97 (95% CI: 0.95, 0.98) for the follow-up questionnaire. The test–retest reliability ICCs for the overall evaluation scores of the post-course and follow-up questionnaires were 0.87 (95% CI: 0.78, 0.92) and 0.91 (95% CI: 0.85, 0.94), respectively.The EDUCATOOL questionnaires have adequate factorial validity, convergent validity, internal consistency, and test–retest reliability and they can be used to evaluate training and learning programmes.","PeriodicalId":52290,"journal":{"name":"Frontiers in Education","volume":"5 11","pages":""},"PeriodicalIF":1.9000,"publicationDate":"2023-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Frontiers in Education","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3389/feduc.2023.1314584","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"EDUCATION & EDUCATIONAL RESEARCH","Score":null,"Total":0}
引用次数: 0
Abstract
Instruments for the evaluation of educational courses are often highly complex and specifically designed for a given type of training. The aims of this study were, therefore, to develop a simple and generic EDUcational Course Assessment TOOLkit (EDUCATOOL) and determine its measurement properties.

The development of EDUCATOOL encompassed: (1) a literature review; (2) drafting the questionnaire through open discussions between three researchers; (3) a Delphi survey with five content experts; and (4) consultations with 20 end-users. A subsequent validity and reliability study involved 152 university students who participated in a short educational course. Immediately after the course and a week later, the participants completed the EDUCATOOL post-course questionnaire. Six weeks after the course and a week later, they completed the EDUCATOOL follow-up questionnaire. To establish the convergent validity of EDUCATOOL, the participants also completed the “Questionnaire for Professional Training Evaluation.”

The EDUCATOOL questionnaires include 12 items grouped into the following evaluation components: (1) reaction; (2) learning; (3) behavioural intent (post-course)/behaviour (follow-up); and (4) expected outcomes (post-course)/results (follow-up). In confirmatory factor analyses, the comparative fit index (CFI = 0.99 and 1.00), root mean square error of approximation (RMSEA = 0.05 and 0.03), and standardised root mean square residual (SRMR = 0.07 and 0.03) indicated adequate goodness of fit for the proposed factor structure of the EDUCATOOL questionnaires. The intraclass correlation coefficients (ICCs) for convergent validity of the post-course and follow-up questionnaires were 0.71 (95% confidence interval [CI]: 0.61, 0.78) and 0.86 (95% CI: 0.78, 0.91), respectively. The internal consistency reliability of the evaluation components, expressed using Cronbach’s alpha, ranged from 0.83 (95% CI: 0.78, 0.87) to 0.88 (95% CI: 0.84, 0.92) for the post-course questionnaire and from 0.95 (95% CI: 0.93, 0.96) to 0.97 (95% CI: 0.95, 0.98) for the follow-up questionnaire. The test–retest reliability ICCs for the overall evaluation scores of the post-course and follow-up questionnaires were 0.87 (95% CI: 0.78, 0.92) and 0.91 (95% CI: 0.85, 0.94), respectively.

The EDUCATOOL questionnaires have adequate factorial validity, convergent validity, internal consistency, and test–retest reliability, and they can be used to evaluate training and learning programmes.
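The abstract reports two reliability statistics that readers may wish to reproduce for their own course-evaluation data: Cronbach’s alpha for internal consistency and intraclass correlation coefficients for test–retest agreement. The following is a minimal sketch, not the authors’ analysis code; the data shapes, variable names, simulated scores, and the choice of the ICC(2,1) model (two-way random effects, absolute agreement, single measurement) are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the sum scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def icc_2_1(scores: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single measure.

    `scores` is an (n_subjects, n_measurements) matrix, e.g. test and retest
    total scores for each participant in the two columns.
    """
    scores = np.asarray(scores, dtype=float)
    n, k = scores.shape
    grand = scores.mean()
    # Mean squares from the two-way ANOVA decomposition (Shrout & Fleiss)
    ms_rows = k * ((scores.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    ms_cols = n * ((scores.mean(axis=0) - grand) ** 2).sum() / (k - 1)
    ss_err = ((scores - grand) ** 2).sum() \
        - (n - 1) * ms_rows - (k - 1) * ms_cols
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Simulated item scores: 152 respondents x 12 items (as in EDUCATOOL),
    # driven by a common latent trait so the items are positively correlated.
    trait = rng.normal(0.0, 1.0, size=(152, 1))
    items = trait + rng.normal(0.0, 1.0, size=(152, 12))
    print(f"Cronbach's alpha: {cronbach_alpha(items):.2f}")

    # Simulated test and retest total scores for the same 152 respondents.
    test = items.sum(axis=1)
    retest = test + rng.normal(0.0, 1.5, size=152)
    print(f"Test-retest ICC(2,1): {icc_2_1(np.column_stack([test, retest])):.2f}")
```

Note that the paper does not specify which ICC model was used; if a different model (e.g. consistency rather than absolute agreement, or average rather than single measures) is required, the denominator of the ICC formula changes accordingly.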