Pub Date: 2023-01-01. Epub Date: 2023-12-08. DOI: 10.3352/jeehp.2023.20.36
Carey Holleran, Jeffrey Konrad, Barbara Norton, Tamara Burlis, Steven Ambler
Purpose: The purpose of this project was to implement a process for learner-driven, formative, prospective, ad-hoc entrustment assessment in Doctor of Physical Therapy clinical education. Our goals were to develop an innovative entrustment assessment tool and then explore whether the tool detected (1) differences between learners at different stages of development and (2) differences within learners across the course of a clinical education experience. We also investigated whether there was a relationship between the number of assessments and change in performance.
Methods: A prospective, observational cohort of clinical instructors (CIs) was recruited to perform learner-driven, formative, ad-hoc, prospective entrustment assessments. Two entrustable professional activities (EPAs) were used: (1) gather a history and perform an examination and (2) implement and modify the plan of care, as needed. CIs provided a rating on the entrustment scale and narrative support for their rating.
Results: Forty-nine learners participated across 4 clinical experiences (CEs), resulting in 453 EPA learner-driven assessments. For both EPAs, statistically significant changes were detected both between learners at different stages of development and within learners across the course of a CE. Improvement within each CE was significantly related to the number of feedback opportunities.
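The abstract does not state which statistical model linked feedback frequency to improvement; as a minimal sketch under that caveat, a rank correlation between the number of learner-driven assessments and each learner's change in entrustment rating could be computed as follows (all values and variable names are hypothetical).

```python
# Hypothetical sketch only: relating the number of ad-hoc assessments a learner
# requested within a clinical experience (CE) to that learner's change on the
# entrustment scale. The values are invented; the study's model is not
# described in the abstract.
from scipy import stats

assessments = [3, 5, 8, 2, 10, 6, 4, 9, 7, 5]                      # assessments per learner
improvement = [0.5, 1.0, 1.5, 0.0, 2.0, 1.0, 0.5, 1.5, 1.5, 1.0]   # end - start rating

rho, p_value = stats.spearmanr(assessments, improvement)
print(f"Spearman rho = {rho:.2f}, P = {p_value:.3f}")
```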
Conclusion: The results of this pilot study provide preliminary support for the use of learner-driven, formative, ad-hoc assessments of competence based on EPAs with a novel entrustment scale. The number of formative assessments requested correlated with change on the EPA scale, suggesting that formative feedback may augment performance improvement.
{"title":"Use of learner-driven, formative, ad-hoc, prospective assessment of competence in physical therapist clinical education in the United States: a prospective cohort study","authors":"Carey Holleran, Jeffrey Konrad, Barbara Norton, Tamara Burlis, Steven Ambler","doi":"10.3352/jeehp.2023.20.36","DOIUrl":"10.3352/jeehp.2023.20.36","url":null,"abstract":"<p><strong>Purpose: </strong>The purpose of this project was to implement a process for learner-driven, formative, prospective, ad-hoc, entrustment assessment in Doctor of Physical Therapy clinical education. Our goals were to develop an innovative entrustment assessment tool, and then explore whether the tool detected (1) differences between learners at different stages of development and (2) differences within learners across the course of a clinical education experience. We also investigated whether there was a relationship between the number of assessments and change in performance.</p><p><strong>Methods: </strong>A prospective, observational, cohort of clinical instructors (CIs) was recruited to perform learner-driven, formative, ad-hoc, prospective, entrustment assessments. Two entrustable professional activities (EPAs) were used: (1) gather a history and perform an examination and (2) implement and modify the plan of care, as needed. CIs provided a rating on the entrustment scale and provided narrative support for their rating.</p><p><strong>Results: </strong>Forty-nine learners participated across 4 clinical experiences (CEs), resulting in 453 EPA learner-driven assessments. For both EPAs, statistically significant changes were detected both between learners at different stages of development and within learners across the course of a CE. Improvement within each CE was significantly related to the number of feedback opportunities.</p><p><strong>Conclusion: </strong>The results of this pilot study provide preliminary support for the use of learner-driven, formative, ad-hoc assessments of competence based on EPAs with a novel entrustment scale. The number of formative assessments requested correlated with change on the EPA scale, suggesting that formative feedback may augment performance improvement.</p>","PeriodicalId":46098,"journal":{"name":"Journal of Educational Evaluation for Health Professions","volume":"20 ","pages":"36"},"PeriodicalIF":4.4,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10823263/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138811993","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-01-01. Epub Date: 2023-04-06. DOI: 10.3352/jeehp.2023.20.12
Ali Khalafi, Yasamin Sharbatdar, Nasrin Khajeali, Mohammad Hosein Haghighizadeh, Mahshid Vaziri
Purpose: The present study aimed to investigate the effect of mini-clinical evaluation exercise (mini-CEX) assessments on improving the clinical skills of nurse anesthesia students at Ahvaz Jundishapur University of Medical Sciences, Ahvaz, Iran.
Methods: This study started on November 1, 2022, and ended on December 1, 2022. It was conducted among 50 nurse anesthesia students divided into intervention and control groups. The intervention group’s clinical skills were evaluated 4 times using the mini-CEX method. In contrast, the same skills were evaluated in the control group based on the conventional method—that is, general supervision by the instructor during the internship and a summative evaluation based on a checklist at the end of the course. The intervention group students also filled out a questionnaire to measure their satisfaction with the mini-CEX method.
Results: The mean score of the students in both the control and intervention groups increased significantly on the post-test (P<0.0001), but the improvement in the scores of the intervention group was significantly greater compared with the control group (P<0.0001). The overall mean score for satisfaction in the intervention group was 76.3 out of a maximum of 95.
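As a hedged illustration of the between-group result reported above, the sketch below compares pre-to-post gain scores of an intervention and a control group with an independent-samples t-test; the simulated scores, group sizes, and effect sizes are assumptions, not the study's data.

```python
# Illustrative only: comparing pre/post gains between a mini-CEX (intervention)
# group and a conventionally assessed (control) group. Data are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)
n_per_group = 25  # 50 students split into 2 groups

gain_intervention = rng.normal(loc=3.0, scale=1.0, size=n_per_group)  # post - pre
gain_control = rng.normal(loc=1.5, scale=1.0, size=n_per_group)

t_stat, p_value = stats.ttest_ind(gain_intervention, gain_control)
print(f"t = {t_stat:.2f}, P = {p_value:.4f}")
```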
Conclusion: The findings of this study showed that using mini-CEX as a formative evaluation method to evaluate clinical skills had a significant effect on the improvement of nurse anesthesia students’ clinical skills, and they had a very favorable opinion about this evaluation method.
{"title":"Improvement of the clinical skills of nurse anesthesia students using mini-clinical evaluation exercises in Iran: a randomized controlled study.","authors":"Ali Khalafi, Yasamin Sharbatdar, Nasrin Khajeali, Mohammad Hosein Haghighizadeh, Mahshid Vaziri","doi":"10.3352/jeehp.2023.20.12","DOIUrl":"10.3352/jeehp.2023.20.12","url":null,"abstract":"<p><strong>Purpose: </strong>The present study aimed to investigate the effect of a mini-clinical evaluation exercise (CEX) assessment on improving the clinical skills of nurse anesthesia students at Ahvaz Jundishapur University of Medical Sciences, Ahvaz, Iran.</p><p><strong>Methods: </strong>This study started on November 1, 2022, and ended on December 1, 2022. It was conducted among 50 nurse anesthesia students divided into intervention and control groups. The intervention group’s clinical skills were evaluated 4 times using the mini-CEX method. In contrast, the same skills were evaluated in the control group based on the conventional method—that is, general supervision by the instructor during the internship and a summative evaluation based on a checklist at the end of the course. The intervention group students also filled out a questionnaire to measure their satisfaction with the miniCEX method.</p><p><strong>Results: </strong>The mean score of the students in both the control and intervention groups increased significantly on the post-test (P<0.0001), but the improvement in the scores of the intervention group was significantly greater compared with the control group (P<0.0001). The overall mean score for satisfaction in the intervention group was 76.3 out of a maximum of 95.</p><p><strong>Conclusion: </strong>The findings of this study showed that using mini-CEX as a formative evaluation method to evaluate clinical skills had a significant effect on the improvement of nurse anesthesia students’ clinical skills, and they had a very favorable opinion about this evaluation method.</p>","PeriodicalId":46098,"journal":{"name":"Journal of Educational Evaluation for Health Professions","volume":"20 ","pages":"12"},"PeriodicalIF":4.4,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10209614/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9524415","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-01-01. Epub Date: 2023-08-27. DOI: 10.3352/jeehp.2023.20.24
Seung-Kwon Myung
Systematic reviews and meta-analyses have become central in many research fields, particularly medicine. They offer the highest level of evidence in evidence-based medicine and support the development and revision of clinical practice guidelines, which offer recommendations for clinicians caring for patients with specific diseases and conditions. This review summarizes the concepts of systematic reviews and meta-analyses and provides guidance on reviewing and assessing such papers. A systematic review refers to a review of a research question that uses explicit and systematic methods to identify, select, and critically appraise relevant research. In contrast, a meta-analysis is a quantitative statistical analysis that combines individual results on the same research question to estimate the common or mean effect. Conducting a meta-analysis involves defining a research topic, selecting a study design, searching literature in electronic databases, selecting relevant studies, and conducting the analysis. One can assess the findings of a meta-analysis by interpreting a forest plot and a funnel plot and by examining heterogeneity. When reviewing systematic reviews and meta-analyses, several essential points must be considered, including the originality and significance of the work, the comprehensiveness of the database search, the selection of studies based on inclusion and exclusion criteria, subgroup analyses by various factors, and the interpretation of the results based on the levels of evidence. This review will provide readers with helpful guidance to help them read, understand, and evaluate these articles.
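As a worked illustration of the pooling and heterogeneity assessment described above, the sketch below computes a fixed-effect (inverse-variance) pooled estimate, its 95% confidence interval, Cochran's Q, and I² for a few made-up study effects; it is a generic example, not an analysis of any article discussed here.

```python
# Generic fixed-effect meta-analysis sketch with invented effect sizes
# (e.g., log odds ratios) and their sampling variances.
import math

effects = [0.30, 0.10, 0.45, 0.25]
variances = [0.02, 0.05, 0.04, 0.03]

weights = [1.0 / v for v in variances]                      # inverse-variance weights
pooled = sum(w * y for w, y in zip(weights, effects)) / sum(weights)
se_pooled = math.sqrt(1.0 / sum(weights))
ci_low, ci_high = pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled

# Cochran's Q and I^2 quantify between-study heterogeneity.
q = sum(w * (y - pooled) ** 2 for w, y in zip(weights, effects))
df = len(effects) - 1
i_squared = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

print(f"Pooled effect = {pooled:.3f} (95% CI {ci_low:.3f} to {ci_high:.3f})")
print(f"Q = {q:.2f} on {df} df, I^2 = {i_squared:.1f}%")
```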
{"title":"How to review and assess a systematic review and meta-analysis article: a methodological study (secondary publication).","authors":"Seung-Kwon Myung","doi":"10.3352/jeehp.2023.20.24","DOIUrl":"10.3352/jeehp.2023.20.24","url":null,"abstract":"<p><p>Systematic reviews and meta-analyses have become central in many research fields, particularly medicine. They offer the highest level of evidence in evidence-based medicine and support the development and revision of clinical practice guidelines, which offer recommendations for clinicians caring for patients with specific diseases and conditions. This review summarizes the concepts of systematic reviews and meta-analyses and provides guidance on reviewing and assessing such papers. A systematic review refers to a review of a research question that uses explicit and systematic methods to identify, select, and critically appraise relevant research. In contrast, a meta-analysis is a quantitative statistical analysis that combines individual results on the same research question to estimate the common or mean effect. Conducting a meta-analysis involves defining a research topic, selecting a study design, searching literature in electronic databases, selecting relevant studies, and conducting the analysis. One can assess the findings of a meta-analysis by interpreting a forest plot and a funnel plot and by examining heterogeneity. When reviewing systematic reviews and meta-analyses, several essential points must be considered, including the originality and significance of the work, the comprehensiveness of the database search, the selection of studies based on inclusion and exclusion criteria, subgroup analyses by various factors, and the interpretation of the results based on the levels of evidence. This review will provide readers with helpful guidance to help them read, understand, and evaluate these articles.</p>","PeriodicalId":46098,"journal":{"name":"Journal of Educational Evaluation for Health Professions","volume":"20 ","pages":"24"},"PeriodicalIF":4.4,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10449599/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10477521","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-01-01. Epub Date: 2023-11-20. DOI: 10.3352/jeehp.2023.20.30
Betzy Clariza Torres-Zegarra, Wagner Rios-Garcia, Alvaro Micael Ñaña-Cordova, Karen Fatima Arteaga-Cisneros, Xiomara Cristina Benavente Chalco, Marina Atena Bustamante Ordoñez, Carlos Jesus Gutierrez Rios, Carlos Alberto Ramos Godoy, Kristell Luisa Teresa Panta Quezada, Jesus Daniel Gutierrez-Arratia, Javier Alejandro Flores-Cohaila
Purpose: We aimed to describe the performance and evaluate the educational value of justifications provided by artificial intelligence chatbots, including GPT-3.5, GPT-4, Bard, Claude, and Bing, on the Peruvian National Medical Licensing Examination (P-NLME).
Methods: This was a cross-sectional analytical study. On July 25, 2023, each multiple-choice question (MCQ) from the P-NLME was entered into each chatbot (GPT-3.5, GPT-4, Bing, Bard, and Claude) 3 times. Then, 4 medical educators categorized the MCQs in terms of medical area, item type, and whether the MCQ required Peru-specific knowledge. They assessed the educational value of the justifications from the 2 top performers (GPT-4 and Bing).
Results: GPT-4 scored 86.7% and Bing scored 82.2%, followed by Bard and Claude, and the historical performance of Peruvian examinees was 55%. Among the factors associated with correct answers, only MCQs that required Peru-specific knowledge had lower odds (odds ratio, 0.23; 95% confidence interval, 0.09-0.61), whereas the remaining factors showed no associations. In assessing the educational value of justifications provided by GPT-4 and Bing, neither showed any significant differences in certainty, usefulness, or potential use in the classroom.
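The abstract reports this association only as an odds ratio with a 95% confidence interval; the sketch below shows how such a figure can be obtained from a 2×2 table with a Wald interval, using invented counts (the study's actual model, likely a regression, is not described here).

```python
# Hypothetical 2x2 table: Peru-specific vs general MCQs, answered correctly or not.
import math

a, b = 15, 10   # Peru-specific: correct, incorrect  (invented counts)
c, d = 70, 10   # general:       correct, incorrect  (invented counts)

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of the log odds ratio
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f} (95% CI, {ci_low:.2f}-{ci_high:.2f})")
```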
Conclusion: Among chatbots, GPT-4 and Bing were the top performers, with Bing performing better at Peru-specific MCQs. Moreover, the educational value of the justifications provided by GPT-4 and Bing could be deemed appropriate. However, it is essential to start addressing the educational value of these chatbots, rather than merely their performance on examinations.
{"title":"Performance of ChatGPT, Bard, Claude, and Bing on the Peruvian National Licensing Medical Examination: a cross-sectional study.","authors":"Betzy Clariza Torres-Zegarra, Wagner Rios-Garcia, Alvaro Micael Ñaña-Cordova, Karen Fatima Arteaga-Cisneros, Xiomara Cristina Benavente Chalco, Marina Atena Bustamante Ordoñez, Carlos Jesus Gutierrez Rios, Carlos Alberto Ramos Godoy, Kristell Luisa Teresa Panta Quezada, Jesus Daniel Gutierrez-Arratia, Javier Alejandro Flores-Cohaila","doi":"10.3352/jeehp.2023.20.30","DOIUrl":"10.3352/jeehp.2023.20.30","url":null,"abstract":"<p><strong>Purpose: </strong>We aimed to describe the performance and evaluate the educational value of justifications provided by artificial intelligence chatbots, including GPT-3.5, GPT-4, Bard, Claude, and Bing, on the Peruvian National Medical Licensing Examination (P-NLME).</p><p><strong>Methods: </strong>This was a cross-sectional analytical study. On July 25, 2023, each multiple-choice question (MCQ) from the P-NLME was entered into each chatbot (GPT-3, GPT-4, Bing, Bard, and Claude) 3 times. Then, 4 medical educators categorized the MCQs in terms of medical area, item type, and whether the MCQ required Peru-specific knowledge. They assessed the educational value of the justifications from the 2 top performers (GPT-4 and Bing).</p><p><strong>Results: </strong>GPT-4 scored 86.7% and Bing scored 82.2%, followed by Bard and Claude, and the historical performance of Peruvian examinees was 55%. Among the factors associated with correct answers, only MCQs that required Peru-specific knowledge had lower odds (odds ratio, 0.23; 95% confidence interval, 0.09-0.61), whereas the remaining factors showed no associations. In assessing the educational value of justifications provided by GPT-4 and Bing, neither showed any significant differences in certainty, usefulness, or potential use in the classroom.</p><p><strong>Conclusion: </strong>Among chatbots, GPT-4 and Bing were the top performers, with Bing performing better at Peru-specific MCQs. Moreover, the educational value of justifications provided by the GPT-4 and Bing could be deemed appropriate. However, it is essential to start addressing the educational value of these chatbots, rather than merely their performance on examinations.</p>","PeriodicalId":46098,"journal":{"name":"Journal of Educational Evaluation for Health Professions","volume":"20 ","pages":"30"},"PeriodicalIF":4.4,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11009012/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138048169","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-01-01. Epub Date: 2023-12-04. DOI: 10.3352/jeehp.2023.20.35
Kyung Eui Bae, Geum Hee Jeong
Purpose: This study aimed to evaluate the impact of a transcultural nursing course on enhancing the cultural competency of graduate nursing students in Korea. We hypothesized that participants’ cultural competency would significantly improve in areas such as communication, biocultural ecology and family, dietary habits, death rituals, spirituality, equity, and empowerment and intermediation after completing the course. Furthermore, we assessed the participants’ overall satisfaction with the course.
Methods: A before-and-after study was conducted with graduate nursing students at Hallym University, Chuncheon, Korea, from March to June 2023. A transcultural nursing course was developed based on Giger & Haddad’s transcultural nursing model and Purnell’s theoretical model of cultural competence. Data were collected using a cultural competence scale for registered nurses developed by Kim and his colleagues. A total of 18 students participated, and the paired t-test was employed to compare pre- and post-intervention scores.
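A minimal sketch of the paired pre/post comparison named in the Methods, assuming one pre-course and one post-course score per participant; the numbers are hypothetical, not the study's data.

```python
# Paired t-test sketch with invented pre/post cultural competence scores.
from scipy import stats

pre = [3.1, 3.4, 2.9, 3.8, 3.2, 3.6, 3.0, 3.5]
post = [4.0, 4.3, 3.8, 4.6, 4.1, 4.5, 3.9, 4.4]

t_stat, p_value = stats.ttest_rel(post, pre)
print(f"paired t = {t_stat:.2f}, P = {p_value:.4f}")
```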
Results: The study revealed significant improvements in all 7 categories of cultural nursing competence (P<0.01). Specifically, the mean differences in scores (pre–post) ranged from 0.74 to 1.09 across the categories. Additionally, participants expressed high satisfaction with the course, with an average score of 4.72 out of a maximum of 5.0.
Conclusion: The transcultural nursing course effectively enhanced the cultural competency of graduate nursing students. Such courses are imperative to ensure quality care for the increasing multicultural population in Korea.
{"title":"Effect of a transcultural nursing course on improving the cultural competency of nursing graduate students in Korea: a before-andafter study","authors":"Kyung Eui Bae, Geum Hee Jeong","doi":"10.3352/jeehp.2023.20.35","DOIUrl":"10.3352/jeehp.2023.20.35","url":null,"abstract":"<p><strong>Purpose: </strong>This study aimed to evaluate the impact of a transcultural nursing course on enhancing the cultural competency of graduate nursing students in Korea. We hypothesized that participants’ cultural competency would significantly improve in areas such as communication, biocultural ecology and family, dietary habits, death rituals, spirituality, equity, and empowerment and intermediation after completing the course. Furthermore, we assessed the participants’ overall satisfaction with the course.</p><p><strong>Methods: </strong>A before-and-after study was conducted with graduate nursing students at Hallym University, Chuncheon, Korea, from March to June 2023. A transcultural nursing course was developed based on Giger & Haddad’s transcultural nursing model and Purnell’s theoretical model of cultural competence. Data was collected using a cultural competence scale for registered nurses developed by Kim and his colleagues. A total of 18 students participated, and the paired t-test was employed to compare pre-and post-intervention scores.</p><p><strong>Results: </strong>The study revealed significant improvements in all 7 categories of cultural nursing competence (P<0.01). Specifically, the mean differences in scores (pre–post) ranged from 0.74 to 1.09 across the categories. Additionally, participants expressed high satisfaction with the course, with an average score of 4.72 out of a maximum of 5.0.</p><p><strong>Conclusion: </strong>The transcultural nursing course effectively enhanced the cultural competency of graduate nursing students. Such courses are imperative to ensure quality care for the increasing multicultural population in Korea.</p>","PeriodicalId":46098,"journal":{"name":"Journal of Educational Evaluation for Health Professions","volume":"20 ","pages":"35"},"PeriodicalIF":4.4,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10955218/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138478914","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-01-01. Epub Date: 2023-11-22. DOI: 10.3352/jeehp.2023.20.32
Catharina Hultgren, Annica Lindkvist, Volkan Özenci, Sophie Curbo
ChatGPT (GPT-3.5) has entered higher education, and there is a need to determine how to use it effectively. This descriptive study compared the ability of GPT-3.5 and teachers to answer questions from dental students and to construct detailed intended learning outcomes. When the answers were analyzed on a Likert scale, we found that GPT-3.5 answered the questions from dental students in a way that was similar to, or even more elaborate than, the answers that had previously been provided by a teacher. GPT-3.5 was also asked to construct detailed intended learning outcomes for a course in microbial pathogenesis; when these were analyzed on a Likert scale, they were, to a large degree, found to be irrelevant. Since students are using GPT-3.5, it is important that instructors learn how to make the best use of it, both to be able to advise students and to benefit from its potential.
{"title":"ChatGPT (GPT-3.5) as an assistant tool in microbial pathogenesis studies in Sweden: a cross-sectional comparative study","authors":"Catharina Hultgren, Annica Lindkvist, Volkan Özenci, Sophie Curbo","doi":"10.3352/jeehp.2023.20.32","DOIUrl":"10.3352/jeehp.2023.20.32","url":null,"abstract":"<p><p>ChatGPT (GPT-3.5) has entered higher education and there is a need to determine how to use it effectively. This descriptive study compared the ability of GPT-3.5 and teachers to answer questions from dental students and construct detailed intended learning outcomes. When analyzed according to a Likert scale, we found that GPT-3.5 answered the questions from dental students in a similar or even more elaborate way compared to the answers that had previously been provided by a teacher. GPT-3.5 was also asked to construct detailed intended learning outcomes for a course in microbial pathogenesis, and when these were analyzed according to a Likert scale they were, to a large degree, found irrelevant. Since students are using GPT-3.5, it is important that instructors learn how to make the best use of it both to be able to advise students and to benefit from its potential.</p>","PeriodicalId":46098,"journal":{"name":"Journal of Educational Evaluation for Health Professions","volume":"20 ","pages":"32"},"PeriodicalIF":4.4,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10725744/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138292042","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In 2022, computer-based testing (CBT) was introduced for the Korean Medical Licensing Examination (KMLE). CBT has also been expanded to the Korean Dental Licensing Examination, Korean Oriental Doctor Medical Licensing Examination, and Korean Care Worker Licensing Examination in 2023. Subsequently, 26 licensing examinations will be administered through CBT in 2025 [1]. For a more convenient and stable testing environment, the Korea Health Personnel Licensing Examination Institute prepared 9 permanent test centers for CBT, which collectively have 1,500 seats (Fig. 1). If the number of examinees on a particular date surpasses 1,500, other sites will be leased. No technical problems occurred during the implementation of CBT for the KMLE. All medical schools in Korea adopted CBT, and no examinee complained of any difficulties in taking CBT. However, further improvements should be made after the transition to CBT. First, standard setting for CBT has still not been adopted, although there were some studies on the cut score in the 2022 volume of the Journal of Educational Evaluation for Health Professions (JEEHP). Kim et al. [2] suggested that acceptable cut scores for ...
{"title":"Issues in the 3rd year of the COVID-19 pandemic, including computer-based testing, study design, ChatGPT, journal metrics, and appreciation to reviewers","authors":"Sun Huh","doi":"10.3352/jeehp.2023.20.5","DOIUrl":"https://doi.org/10.3352/jeehp.2023.20.5","url":null,"abstract":"In 2022, computer-based testing (CBT) was introduced for the Korean Medical Licensing Examination (KMLE). CBT has also been expanded to the Korean Dental Licensing Examination, Korean Oriental Doctor Medical Licensing Examination, and Korean Care Worker Licensing Examination in 2023. Subsequently, 26 licensing examinations will be administered through CBT in 2025 [1]. For a more convenient and stable testing environment, the Korea Health Personnel Licensing Institute prepared 9 permanent test centers for CBT, which collectively have 1,500 seats (Fig. 1). If the number of examinees on a particular date surpasses 1,500, other sites will be leased. No technical problems occurred during the implementation of CBT for the KMLE. All medical schools in Korea adopted CBT, and no examinee complained of any difficulties in taking CBT. However, further improvements should be made after the transition to CBT. First, the standard setting of CBT still has not been adopted, although there were some studies on the cut score in the 2022 volume of the Journal of Educational Evaluation for Health Professions (JEEHP). Kim et al. [2] suggested that acceptable cut scores for Issues in the 3rd year of the COVID-19 pandemic, including computer-based testing, study design, ChatGPT, journal metrics, and appreciation to reviewers Sun Huh","PeriodicalId":46098,"journal":{"name":"Journal of Educational Evaluation for Health Professions","volume":"20 ","pages":"5"},"PeriodicalIF":4.4,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9986465/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9130437","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-01-01. DOI: 10.3352/jeehp.2023.20.18
Kyunghee Kim, So Young Kang, Younhee Kang, Youngran Kweon, Hyunjung Kim, Youngshin Song, Juyeon Cho, Mi-Young Choi, Hyun Su Lee
Purpose: This study aims to suggest the number of test items in each of the 8 nursing activity categories of the Korean Nursing Licensing Examination, which comprises 134 activity statements covered by 275 items, so that the examination can evaluate the minimum ability that nursing graduates must have to perform their duties.
Methods: Two opinion surveys involving the members of 7 academic societies were conducted from March 19 to May 14, 2021. The survey results were reviewed by members of 4 expert associations from May 21 to June 4, 2021. The revised numbers of items in each category were compared with those reported by Tak and colleagues and with those of the National Council Licensure Examination for Registered Nurses (NCLEX-RN) of the United States.
Results: Based on 2 opinion surveys and previous studies, the suggestions for item allocation to 8 nursing activity categories of the Korean Nursing Licensing Examination in this study are as follows: 50 items for management of care and improvement of professionalism, 33 items for safety and infection control, 40 items for management of potential risk, 28 items for basic care, 47 items for physiological integrity and maintenance, 33 items for pharmacological and parenteral therapies, 24 items for psychosocial integrity and maintenance, and 20 items for health promotion and maintenance. Twenty other items related to health and medical laws were not included due to their mandatory status.
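A quick arithmetic check confirms that the suggested allocation adds up to the 275 items mentioned in the Purpose (the 20 health and medical law items are handled separately and are therefore excluded):

```python
# Verify that the suggested category allocation sums to 275 items.
allocation = {
    "management of care and improvement of professionalism": 50,
    "safety and infection control": 33,
    "management of potential risk": 40,
    "basic care": 28,
    "physiological integrity and maintenance": 47,
    "pharmacological and parenteral therapies": 33,
    "psychosocial integrity and maintenance": 24,
    "health promotion and maintenance": 20,
}
total = sum(allocation.values())
assert total == 275
print(total)  # 275
```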
Conclusion: These suggestions for the number of test items for each activity category will be helpful in developing new items for the Korean Nursing Licensing Examination.
{"title":"Suggestion for item allocation to 8 nursing activity categories of the Korean Nursing Licensing Examination: a survey-based descriptive study.","authors":"Kyunghee Kim, So Young Kang, Younhee Kang, Youngran Kweon, Hyunjung Kim, Youngshin Song, Juyeon Cho, Mi-Young Choi, Hyun Su Lee","doi":"10.3352/jeehp.2023.20.18","DOIUrl":"https://doi.org/10.3352/jeehp.2023.20.18","url":null,"abstract":"<p><strong>Purpose: </strong>This study aims to suggest the number of test items in each of 8 nursing activity categories of the Korean Nursing Licensing Examination, which comprises 134 activity statements including 275 items. The examination will be able to evaluate the minimum ability that nursing graduates must have to perform their duties.</p><p><strong>Methods: </strong>Two opinion surveys involving the members of 7 academic societies were conducted from March 19 to May 14, 2021. The survey results were reviewed by members of 4 expert associations from May 21 to June 4, 2021. The results for revised numbers of items in each category were compared with those reported by Tak and his colleagues and the National Council License Examination for Registered Nurses of the United States.</p><p><strong>Results: </strong>Based on 2 opinion surveys and previous studies, the suggestions for item allocation to 8 nursing activity categories of the Korean Nursing Licensing Examination in this study are as follows: 50 items for management of care and improvement of professionalism, 33 items for safety and infection control, 40 items for management of potential risk, 28 items for basic care, 47 items for physiological integrity and maintenance, 33 items for pharmacological and parenteral therapies, 24 items for psychosocial integrity and maintenance, and 20 items for health promotion and maintenance. Twenty other items related to health and medical laws were not included due to their mandatory status.</p><p><strong>Conclusion: </strong>These suggestions for the number of test items for each activity category will be helpful in developing new items for the Korean Nursing Licensing Examination.</p>","PeriodicalId":46098,"journal":{"name":"Journal of Educational Evaluation for Health Professions","volume":"20 ","pages":"18"},"PeriodicalIF":4.4,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10352010/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9827813","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Learning about one’s implicit bias is crucial for improving one’s cultural competency and thereby reducing health inequity. To evaluate bias among medical students following a previously developed cultural training program targeting New Zealand Māori, we developed a text-based, self-evaluation tool called the Similarity Rating Test (SRT). The development process of the SRT was resource-intensive, limiting its generalizability and applicability. Here, we explored the potential of ChatGPT, an automated chatbot, to assist in the development process of the SRT by comparing ChatGPT’s and students’ evaluations of the SRT. Despite results showing non-significant equivalence and difference between ChatGPT’s and students’ ratings, ChatGPT’s ratings were more consistent than students’ ratings. The consistency rate was higher for non-stereotypical than for stereotypical statements, regardless of rater type. Further studies are warranted to validate ChatGPT’s potential for assisting in SRT development for implementation in medical education and evaluation of ethnic stereotypes and related topics.
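The phrase "non-significant equivalence and difference" implies that both a conventional difference test and an equivalence test were run; the sketch below pairs a one-sample t-test on rating differences with a two one-sided tests (TOST) procedure, using an assumed equivalence margin and invented ratings, purely to illustrate the idea rather than to reproduce the authors' analysis.

```python
# Hypothetical paired ratings and an assumed equivalence margin of +/-0.5.
import numpy as np
from scipy import stats

chatgpt = np.array([4.0, 3.5, 4.5, 3.0, 4.0, 3.5, 4.0, 4.5])
students = np.array([3.5, 4.0, 4.0, 3.5, 4.5, 3.0, 4.0, 4.0])
margin = 0.5

diff = chatgpt - students
_, p_diff = stats.ttest_1samp(diff, 0.0)                               # difference test
_, p_lower = stats.ttest_1samp(diff, -margin, alternative="greater")   # TOST, lower bound
_, p_upper = stats.ttest_1samp(diff, margin, alternative="less")       # TOST, upper bound
p_tost = max(p_lower, p_upper)

print(f"difference test P = {p_diff:.3f}, equivalence (TOST) P = {p_tost:.3f}")
```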
{"title":"Comparing ChatGPT’s ability to rate the degree of stereotypes and the consistency of stereotype attribution with those of medical students in New Zealand in developing a similarity rating test: a methodological study.","authors":"Chao-Cheng Lin, Zaine Akuhata-Huntington, Che-Wei Hsu","doi":"10.3352/jeehp.2023.20.17","DOIUrl":"10.3352/jeehp.2023.20.17","url":null,"abstract":"<p><p>Learning about one’s implicit bias is crucial for improving one’s cultural competency and thereby reducing health inequity. To evaluate bias among medical students following a previously developed cultural training program targeting New Zealand Māori, we developed a text-based, self-evaluation tool called the Similarity Rating Test (SRT). The development process of the SRT was resource-intensive, limiting its generalizability and applicability. Here, we explored the potential of ChatGPT, an automated chatbot, to assist in the development process of the SRT by comparing ChatGPT’s and students’ evaluations of the SRT. Despite results showing non-significant equivalence and difference between ChatGPT’s and students’ ratings, ChatGPT’s ratings were more consistent than students’ ratings. The consistency rate was higher for non-stereotypical than for stereotypical statements, regardless of rater type. Further studies are warranted to validate ChatGPT’s potential for assisting in SRT development for implementation in medical education and evaluation of ethnic stereotypes and related topics.</p>","PeriodicalId":46098,"journal":{"name":"Journal of Educational Evaluation for Health Professions","volume":"20 ","pages":"17"},"PeriodicalIF":4.4,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10356547/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9839669","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Purpose: A nutrition support nurse is a member of a nutrition support team and a health care professional who plays a significant role in all aspects of nutritional care. This study aims to investigate ways to improve the quality of the tasks performed by nutrition support nurses through a questionnaire survey conducted in Korea.
Methods: An online survey was conducted between October 12 and November 31, 2018. The questionnaire consisted of 36 items categorized into 5 subscales: nutrition-focused support care, education and counseling, consultation and coordination, research and quality improvement, and leadership. The importance-performance analysis method was used to examine the relationship between the importance and performance of nutrition support nurses' tasks.
Results: A total of 101 nutrition support nurses participated in this survey. The importance (5.56±0.78) and performance (4.50±1.06) of nutrition support nurses' tasks showed a significant difference (t=11.27, P<0.001). Education, counseling/consultation, and participation in developing their processes and guidelines were identified as low-performance activities compared with their importance.
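As a hedged illustration of the importance-performance analysis named in the Methods, the sketch below places a few hypothetical task items into the usual IPA quadrants by comparing each item's mean importance and performance ratings with the overall means; the item names and ratings are invented, not the study's 36 items.

```python
# Importance-performance analysis (IPA) quadrant sketch with invented items.
importance = {"education": 6.1, "counseling": 5.9, "monitoring": 5.4, "documentation": 4.8}
performance = {"education": 4.2, "counseling": 4.0, "monitoring": 5.1, "documentation": 4.6}

mean_imp = sum(importance.values()) / len(importance)
mean_perf = sum(performance.values()) / len(performance)

quadrants = {
    (True, True): "keep up the good work",
    (True, False): "concentrate here",      # high importance, low performance
    (False, True): "possible overkill",
    (False, False): "low priority",
}

for item in importance:
    key = (importance[item] >= mean_imp, performance[item] >= mean_perf)
    print(f"{item}: {quadrants[key]}")
```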
Conclusion: To deliver nutrition support interventions effectively, nutrition support nurses should obtain the relevant qualifications or competencies through education programs based on their practice. Greater awareness of nutrition support nurses' participation in research and quality improvement activities is also required for their role development.
{"title":"Identifying the nutrition support nurses' tasks using importance-performance analysis in Korea: a descriptive study.","authors":"Jeong Yun Park","doi":"10.3352/jeehp.2023.20.3","DOIUrl":"https://doi.org/10.3352/jeehp.2023.20.3","url":null,"abstract":"<p><strong>Purpose: </strong>Nutrition support nurse is a member of a nutrition support team and is a health care professional who takes a significant part in all aspects of nutritional care. This study aims to investigate ways to improve the quality of tasks performed by nutrition support nurses through survey questionnaires in Korea.</p><p><strong>Methods: </strong>An online survey was conducted between October 12 and November 31, 2018. The questionnaire consists of 36 items categorized into 5 subscales: nutrition-focused support care, education and counseling, consultation and coordination, research and quality improvement, and leadership. The importance-performance analysis method was used to confirm the relationship between the importance and performance of nutrition support nurses' tasks.</p><p><strong>Results: </strong>A total of 101 nutrition support nurses participated in this survey. The importance (5.56±0.78) and performance (4.50±1.06) of nutrition support nurses' tasks showed a significant difference (t=11.27, P<0.001). Education, counseling/consultation, and participation in developing their processes and guidelines were identified as low-performance activities compared with their importance.</p><p><strong>Conclusion: </strong>To intervene nutrition support effectively, nutrition support nurses should have the qualification or competency through the education program based on their practice. Improved awareness of nutrition support nurses participating in research and quality improvement activity for role development is required.</p>","PeriodicalId":46098,"journal":{"name":"Journal of Educational Evaluation for Health Professions","volume":"20 ","pages":"3"},"PeriodicalIF":4.4,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9935079/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9299559","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}