Pub Date: 2023-01-01 | Epub Date: 2023-12-28 | DOI: 10.3352/jeehp.2023.20.39
Hyunju Lee, Soobin Park
Purpose: This study assessed the performance of 6 generative artificial intelligence (AI) platforms on the learning objectives of medical arthropodology in a parasitology class in Korea. We queried the platforms in Korean and English to determine the amount of information, accuracy, and relevance of their responses in each language.
Methods: From December 15 to 17, 2023, 6 generative AI platforms—Bard, Bing, Claude, Clova X, GPT-4, and Wrtn—were tested on 7 medical arthropodology learning objectives in English and Korean. Clova X and Wrtn are platforms from Korean companies. Responses were evaluated using specific criteria for the English and Korean queries.
Results: Bard provided abundant information but ranked fourth in accuracy and relevance. GPT-4, with high information content, ranked first in accuracy and relevance. Clova X was fourth in amount of information but second in accuracy and relevance. Bing provided less information, with moderate accuracy and relevance. Wrtn’s answers were short, with average accuracy and relevance. Claude offered a reasonable amount of information but lower accuracy and relevance. The English responses were superior in all aspects. Clova X was notably optimized for Korean, leading in relevance.
Conclusion: In a study of 6 generative AI platforms applied to medical arthropodology, GPT-4 excelled overall, while Clova X, a Korea-based AI product, achieved 100% relevance in Korean queries, the highest among its peers. Utilizing these AI platforms in classrooms improved the authors’ self-efficacy and interest in the subject, offering a positive experience of interacting with generative AI platforms to question and receive information.
Title: Information amount, accuracy, and relevance of generative artificial intelligence platforms’ answers regarding learning objectives of medical arthropodology evaluated in English and Korean queries in December 2023: a descriptive study
Pub Date: 2023-01-01 | Epub Date: 2023-12-31 | DOI: 10.3352/jeehp.2023.20.40
Sun Huh
Title: Editorial policies of Journal of Educational Evaluation for Health Professions on the use of generative artificial intelligence in article writing and peer review
Purpose: Coronavirus disease 2019 (COVID-19) has heavily impacted medical clinical education in Taiwan. Medical curricula have been altered to minimize exposure and limit transmission. This study investigated the effect of COVID-19 on Taiwanese medical students' clinical performance using online standardized evaluation systems and explored the factors influencing medical education during the pandemic.
Methods: Medical students were scored from 0 to 100 based on their clinical performance from 1/1/2018 to 6/31/2021. The students were placed into pre-COVID-19 (before 2/1/2020) and midst-COVID-19 (on and after 2/1/2020) groups. Each group was further categorized into COVID-19-affected specialties (pulmonary, infectious, and emergency medicine) and other specialties. Generalized estimating equations (GEEs) were used to compare and examine the effects of relevant variables on student performance.
Results: In total, 16,944 clinical scores were obtained for COVID-19-affected specialties and other specialties. For the COVID-19-affected specialties, the midst-COVID-19 score (88.51±3.52) was significantly lower than the pre-COVID-19 score (90.14±3.55) (P<0.0001). For the other specialties, the midst-COVID-19 score (88.32±3.68) was also significantly lower than the pre-COVID-19 score (90.06±3.58) (P<0.0001). There were 1,322 students (837 males and 485 females). Male students had significantly lower scores than female students (89.33±3.68 vs. 89.99±3.66, P=0.0017). GEE analysis revealed that the COVID-19 pandemic (unstandardized beta coefficient [B]=-1.99, standard error [SE]=0.13, P<0.0001), COVID-19-affected specialties (B=0.26, SE=0.11, P=0.0184), female students (B=1.10, SE=0.20, P<0.0001), and female attending physicians (B=-0.19, SE=0.08, P=0.0145) were independently associated with students' scores.
Conclusion: COVID-19 negatively impacted medical students' clinical performance, regardless of their specialty. Female students outperformed male students, irrespective of the pandemic.
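The published analysis used generalized estimating equations to account for repeated scores per student. As a much simpler, hedged sketch of how the reported pre/midst contrast translates into a test statistic (ignoring the clustering that GEE handles, and using hypothetical group sizes, since the abstract reports only the 16,944 total), a Welch t statistic can be computed from the reported means and SDs:

```python
import math

def welch_t(m1, s1, n1, m2, s2, n2):
    """Welch's t statistic from summary statistics (no clustering adjustment)."""
    se = math.sqrt(s1 ** 2 / n1 + s2 ** 2 / n2)
    return (m1 - m2) / se

# Reported mean±SD for COVID-19-affected specialties (pre vs. midst);
# the per-group sample sizes (4,000 each) are hypothetical placeholders.
t = welch_t(90.14, 3.55, 4000, 88.51, 3.52, 4000)
```

With group sizes of this order, the statistic lands far into the rejection region, consistent with the reported P<0.0001; unlike this sketch, the published GEE model also adjusts for within-student correlation.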
Title: Negative effects on medical students' scores for clinical performance during the COVID-19 pandemic in Taiwan: a comparative study
Authors: Eunice Jia-Shiow Yuan, Shiau-Shian Huang, Chia-An Hsu, Jiing-Feng Lirng, Tzu-Hao Li, Chia-Chang Huang, Ying-Ying Yang, Chung-Pin Li, Chen-Huan Chen
DOI: 10.3352/jeehp.2023.20.37
Pub Date: 2023-01-01 | Epub Date: 2023-11-22 | DOI: 10.3352/jeehp.2023.20.31
Yoon Hee Kim, Bo Hyun Kim, Joonki Kim, Bokyoung Jung, Sangyoung Bae
Purpose: This study presents item analysis results of the 26 health personnel licensing examinations managed by the Korea Health Personnel Licensing Examination Institute (KHPLEI) in 2022.
Methods: The item difficulty index, item discrimination index, and reliability were calculated. The item discrimination index was calculated in 2 ways: as a discrimination index based on the upper and lower 27% rule, and as the item-total correlation.
Results: Out of 468,352 total examinees, 418,887 (89.4%) passed. The pass rates ranged from 27.3% for health educators (level 1) to 97.1% for oriental medical doctors. Most examinations had a high average difficulty index, albeit to varying degrees, ranging from 61.3% for prosthetists and orthotists to 83.9% for care workers. The average discrimination index based on the upper and lower 27% rule ranged from 0.17 for oriental medical doctors to 0.38 for radiological technologists. The average item-total correlation ranged from 0.20 for oriental medical doctors to 0.38 for radiological technologists. The Cronbach α, as a measure of reliability, ranged from 0.872 for health educators (level 3) to 0.978 for medical technologists. The correlation coefficient between the average difficulty index and average discrimination index was -0.2452 (P=0.1557), that between the average difficulty index and the average item-total correlation was 0.3502 (P=0.0392), and that between the average discrimination index and the average item-total correlation was 0.7944 (P<0.0001).
Conclusion: This technical report presents the item analysis results and reliability of the recent examinations by the KHPLEI, demonstrating an acceptable range of difficulty index and discrimination index values, as well as good reliability.
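The indices reported above follow standard classical test theory formulas. A minimal sketch, run on a small hypothetical 0/1 response matrix (not KHPLEI data), of the difficulty index (proportion correct), the upper/lower 27% discrimination index, the item-total correlation, and Cronbach's α:

```python
import math
import statistics

def pearson(x, y):
    """Pearson correlation, used here as the item-total correlation."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

def item_analysis(responses):
    """Classical item analysis for a 0/1 response matrix (rows = examinees)."""
    n, k = len(responses), len(responses[0])
    totals = [sum(row) for row in responses]
    order = sorted(range(n), key=lambda i: totals[i])
    g = max(1, round(0.27 * n))               # upper/lower 27% group size
    lower, upper = order[:g], order[-g:]

    difficulty, discrimination, item_total = [], [], []
    for j in range(k):
        col = [row[j] for row in responses]
        difficulty.append(sum(col) / n)       # proportion answering correctly
        discrimination.append(
            sum(responses[i][j] for i in upper) / g
            - sum(responses[i][j] for i in lower) / g
        )
        item_total.append(pearson(col, totals))

    # Cronbach's alpha: k/(k-1) * (1 - sum of item variances / total variance)
    item_vars = [statistics.pvariance([row[j] for row in responses]) for j in range(k)]
    alpha = k / (k - 1) * (1 - sum(item_vars) / statistics.pvariance(totals))
    return difficulty, discrimination, item_total, alpha

# Hypothetical 5-examinee x 3-item matrix for illustration (not KHPLEI data).
R = [[1, 1, 1], [1, 1, 0], [1, 0, 0], [0, 1, 0], [0, 0, 0]]
difficulty, discrimination, item_total, alpha = item_analysis(R)
```

With real examination data the matrix would have thousands of rows, and α values such as the 0.872–0.978 range above emerge from the same formula.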
Title: Item difficulty index, discrimination index, and reliability of the 26 health professions licensing examinations in 2022, Korea: a psychometric study
Pub Date: 2022-05-17 | DOI: 10.3352/jeehp.2022.19.11
Dara Dasawulansari Syamsuri, Brahmana Askandar Tjokroprawiro, E. Kurniawati, Budi Utomo, D. Kuswanto
Purpose: During the coronavirus disease 2019 (COVID-19) pandemic, the number of abdominal hysterectomy procedures decreased in Indonesia. The existing commercial abdominal hysterectomy simulation model is expensive and difficult to reuse. This study compared residents’ abdominal hysterectomy skills after simulation-based training using the Surabaya hysterectomy mannequin following a video demonstration.
Methods: We randomized 3rd- and 4th-year obstetrics and gynecology residents to a video-based group (group 1), a simulation-based group (group 2), and a combination group (group 3). Abdominal hysterectomy skills were compared before and after the educational intervention. The pre- and post-tests were scored by blinded experts using the validated Objective Structured Assessment of Technical Skills (OSATS) and Global Rating Scale (GRS).
Results: A total of 33 residents were included in the pre- and post-tests. The mean differences in OSATS and GRS scores after the intervention were greater in group 3 than in groups 1 and 2 (OSATS: 4.64 [95% confidence interval (CI), 2.90–6.37] vs. 2.55 [95% CI, 2.19–2.90] vs. 3.82 [95% CI, 2.41–5.22], P=0.047; GRS: 10.00 [95% CI, 7.01–12.99] vs. 5.18 [95% CI, 3.99–6.38] vs. 7.18 [95% CI, 6.11–8.26], P=0.006). The 3rd-year residents in group 3 had greater mean differences in OSATS and GRS scores than the 4th-year residents (OSATS: 5.67 [95% CI, 2.88–8.46] vs. 3.40 [95% CI, 0.83–5.97]; GRS: 12.83 [95% CI, 8.61–17.05] vs. 5.67 [95% CI, 2.80–8.54]).
Conclusion: Simulation-based training using the Surabaya hysterectomy mannequin following a video demonstration can serve as a bridge to learning abdominal hysterectomy for residents who had less surgical experience during the COVID-19 pandemic.
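The bracketed values above are 95% CIs around mean pre/post differences. A simplified sketch of how such an interval is formed (using a normal approximation rather than the t quantile a sample this small would strictly call for; the score vectors are hypothetical, not the study's data):

```python
import statistics
from statistics import NormalDist

def mean_diff_ci(post, pre, level=0.95):
    """Mean of paired differences with a normal-approximation CI."""
    diffs = [a - b for a, b in zip(post, pre)]
    m = statistics.fmean(diffs)
    se = statistics.stdev(diffs) / len(diffs) ** 0.5
    z = NormalDist().inv_cdf(0.5 + level / 2)  # 1.96 for a 95% CI
    return m, (m - z * se, m + z * se)

# Hypothetical paired pre/post scores for 5 residents.
pre = [10, 12, 11, 13, 12]
post = [14, 15, 15, 16, 15]
m, (lo, hi) = mean_diff_ci(post, pre)
```

A non-overlap between such intervals is suggestive but not a substitute for the between-group tests (P values) the study reports.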
Title: Simulation-based training using a novel Surabaya hysterectomy mannequin following video demonstration to improve abdominal hysterectomy skills of obstetrics and gynecology residents during the COVID-19 pandemic in Indonesia: a pre- and post-intervention study
Pub Date: 2022-05-10 | DOI: 10.3352/jeehp.2022.19.10
Seung-Joo Na, Hyerin Roh, K. Chun, K. Park, Do-Hwan Kim
Purpose: This study aimed to gather opinions from medical educators on the possibility of introducing an interview to the Korean Medical Licensing Examination (KMLE) to assess professional attributes. Specifically, the following topics were addressed: the appropriate timing and tools for assessing unprofessional conduct; the possibility that introducing an interview to the KMLE could prevent unprofessional conduct; and the feasibility of implementing an interview in the KMLE.
Methods: A cross-sectional study based on a survey questionnaire was adopted. We analyzed 104 news reports about doctors’ unprofessional conduct to determine the deficient professional attributes. From these, we derived 24 items of unprofessional conduct, developed the questionnaire, and surveyed 250 members of the Korean Society of Medical Education twice. Descriptive statistics, cross-tabulation analysis, and Fisher’s exact test were applied to the responses. Answers to the open-ended questions were analyzed using conventional content analysis.
Results: In the survey, 49 members (19.6%) responded; of these, 24 (49.5%) responded to the 2nd survey. For assessing unprofessional conduct, no single timing was dominant among basic medical education (BME), the KMLE, and continuing professional development (CPD). Likewise, no single assessment tool was dominant among written examinations, objective structured clinical examinations, practice observation, and interviews. Regarding whether introducing an interview to the KMLE could prevent unprofessional conduct, responses were split between “impossible” (49.0%) and “possible” (42.9%). In terms of implementation, “impossible” (50.0%) was selected more often than “possible” (33.3%).
Conclusion: Professional attributes should be assessed with various tools over the period from BME to CPD. Hence, it may be impossible to introduce an interview to assess professional attributes to the KMLE, and a system such as self-regulation by the professional body, rather than the licensing examination, is needed.
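The Fisher's exact test named in the Methods can be computed directly for a 2x2 table from the hypergeometric distribution. A minimal two-sided sketch (the example table is hypothetical, not the study's data):

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher's exact test for a 2x2 table [[a, b], [c, d]].

    Sums the probabilities of all tables with the same margins that are
    no more likely than the observed table.
    """
    row1, row2 = a + b, c + d
    col1, n = a + c, a + b + c + d

    def p_table(x):  # hypergeometric probability of top-left cell value x
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = p_table(a)
    lo, hi = max(0, col1 - row2), min(row1, col1)
    total = 0.0
    for x in range(lo, hi + 1):
        p = p_table(x)
        if p <= p_obs * (1 + 1e-9):  # tolerance for float comparison
            total += p
    return total

# Hypothetical table, e.g. "possible"/"impossible" responses by respondent group.
p = fisher_exact_2x2(3, 0, 0, 3)
```

The exact test is preferred over chi-square when, as here with only 49 respondents, expected cell counts are small.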
Title: Is it possible to introduce an interview to the Korean Medical Licensing Examination to assess professional attributes?: a survey-based observational study
M. Matthiesen, Michael S. Kelly, Kristina Dzara, A. S. Begin
Purpose: Residents and attendings agree on the importance of feedback to resident education. However, while faculty report providing frequent feedback, residents often do not perceive receiving it, particularly in the context of teaching. Given the nuanced differences between feedback and teaching, we aimed to explore resident and attending perceptions of feedback and teaching in the clinical setting.
Methods: We conducted a qualitative study of internal medicine residents and attendings at the Massachusetts General Hospital from December 2018 through March 2019 to investigate perceptions of feedback in the inpatient clinical setting. Residents and faculty were recruited to participate in focus groups. Data were analyzed using thematic analysis to explore perspectives on, and barriers to, feedback provision and identification.
Results: Five focus groups included 33 total participants in 3 attending groups (n=20) and 2 resident groups (n=13). Thematic analysis of focus group transcripts identified 7 themes, which were organized into 3 thematic categories: (1) disentangling feedback and teaching, (2) delivering high-quality feedback, and (3) experiencing feedback in the group setting. Residents and attendings highlighted important themes in discriminating feedback from teaching: while feedback is reactive, in response to an action or behavior, teaching is proactive and oriented toward future endeavors.
Conclusion: Confusion between the critical concepts of teaching and feedback may be minimized by allowing each to have its intended impact, either in response to prior events or aimed toward those yet to take place.
Title: Medical residents and attending physicians’ perceptions of feedback and teaching in the United States: a qualitative study
Pub Date: 2022-04-26 | DOI: 10.3352/jeehp.2022.19.9
M. Wormley, W. Romney, Diana Veneri, Andrea Oberlander
Purpose: Active video gaming (AVG) is used in physical therapy (PT) to treat individuals with a variety of diagnoses across the lifespan. The literature supports improvements in balance, cardiovascular endurance, and motor control; however, evidence is lacking regarding the implementation of AVG in PT education. This study investigated doctoral physical therapy (DPT) students’ confidence following active exploration of AVG systems as a PT intervention in the United States.
Methods: This pretest-posttest study included 60 DPT students in 2017 (cohort 1) and 55 students in 2018 (cohort 2) enrolled in a problem-based learning curriculum. AVG systems were embedded into patient cases and 2 interactive laboratory classes across 2 consecutive semesters (April–December 2017 and April–December 2018). Participants completed a 31-question survey before the intervention and 8 months later. Students’ confidence was rated for general use, game selection, plan of care, set-up, documentation, and setting, along with demographics. Descriptive statistics and the Wilcoxon signed-rank test were used to compare confidence pre- and post-intervention.
Results: Both cohorts showed increased confidence at the post-test, with median (interquartile range) scores as follows: cohort 1: pre-test, 57.1 (44.3–63.5); post-test, 79.1 (73.1–85.4); cohort 2: pre-test, 61.4 (48.0–70.7); post-test, 89.3 (80.0–93.2). Cohort 2 was significantly more confident at baseline than cohort 1 (P<0.05). In cohort 1, students’ data were paired, and confidence levels significantly increased in all domains: use, Z=-6.2 (P<0.01); selection, Z=-5.9 (P<0.01); plan of care, Z=-6.0 (P<0.01); set-up, Z=-5.5 (P<0.01); documentation, Z=-6.0 (P<0.01); setting, Z=-6.3 (P<0.01); and total score, Z=-6.4 (P<0.01).
Conclusion: Structured, active experiences with AVG resulted in a significant increase in students’ confidence. As technology advances in healthcare delivery, it is essential to expose students to these technologies in the classroom.
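The Wilcoxon signed-rank test used for the paired pre/post comparisons above can be sketched in a few lines (normal approximation without tie correction; the pre/post vectors are hypothetical, not the study's data):

```python
import math

def wilcoxon_signed_rank(pre, post):
    """Wilcoxon signed-rank test, normal approximation, no tie correction.

    Returns W (the smaller of the signed-rank sums) and an approximate
    z score; zero differences are dropped, ties share average ranks.
    """
    diffs = [b - a for a, b in zip(pre, post) if b != a]
    n = len(diffs)
    ranked = sorted(diffs, key=abs)
    ranks = {}
    i = 0
    while i < n:  # assign average ranks within groups of tied |differences|
        j = i
        while j < n and abs(ranked[j]) == abs(ranked[i]):
            j += 1
        avg = (i + 1 + j) / 2  # mean of ranks i+1 .. j
        ranks[abs(ranked[i])] = avg
        i = j
    w_plus = sum(ranks[abs(d)] for d in diffs if d > 0)
    w_minus = sum(ranks[abs(d)] for d in diffs if d < 0)
    w = min(w_plus, w_minus)
    mu = n * (n + 1) / 4
    sigma = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    return w, (w - mu) / sigma

# Hypothetical paired confidence scores for 5 students.
pre = [1, 2, 3, 4, 5]
post = [3, 4, 5, 6, 7]
w, z = wilcoxon_signed_rank(pre, post)
```

The large negative Z values reported in the Results (e.g., Z=-6.2) arise the same way, from the much larger cohort of paired students.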
{"title":"Doctoral physical therapy students’ increased confidence following exploration of active video gaming systems in a problem-based learning curriculum in the United States: a pre- and post-intervention study","authors":"M. Wormley, W. Romney, Diana Veneri, Andrea Oberlander","doi":"10.3352/jeehp.2022.19.7","DOIUrl":"https://doi.org/10.3352/jeehp.2022.19.7","url":null,"abstract":"Purpose Active video gaming (AVG) is used in physical therapy (PT) to treat individuals with a variety of diagnoses across the lifespan. The literature supports improvements in balance, cardiovascular endurance, and motor control; however, evidence is lacking regarding the implementation of AVG in PT education. This study investigated doctoral physical therapy (DPT) students’ confidence following active exploration of AVG systems as a PT intervention in the United States. Methods This pretest-posttest study included 60 DPT students in 2017 (cohort 1) and 55 students in 2018 (cohort 2) enrolled in a problem-based learning curriculum. AVG systems were embedded into patient cases and 2 interactive laboratory classes across 2 consecutive semesters (April–December 2017 and April–December 2018). Participants completed a 31-question survey before the intervention and 8 months later. Students’ confidence was rated for general use, game selection, plan of care, set-up, documentation, setting, and demographics. Descriptive statistics and the Wilcoxon signed-rank test were used to compare differences in confidence pre- and post-intervention. Results Both cohorts showed increased confidence at the post-test, with median (interquartile range) scores as follows: cohort 1: pre-test, 57.1 (44.3–63.5); post-test, 79.1 (73.1–85.4); and cohort 2: pre-test, 61.4 (48.0–70.7); post-test, 89.3 (80.0–93.2). Cohort 2 was significantly more confident at baseline than cohort 1 (P<0.05). 
In cohort 1, students’ data were paired and confidence levels significantly increased in all domains: use, Z=-6.2 (P<0.01); selection, Z=-5.9 (P<0.01); plan of care, Z=-6.0 (P<0.01); set-up, Z=-5.5 (P<0.01); documentation, Z=-6.0 (P<0.01); setting, Z=-6.3 (P<0.01); and total score, Z=-6.4 (P<0.01). Conclusion Structured, active experiences with AVG resulted in a significant increase in students’ confidence. As technology advances in healthcare delivery, it is essential to expose students to these technologies in the classroom.","PeriodicalId":46098,"journal":{"name":"Journal of Educational Evaluation for Health Professions","volume":" ","pages":""},"PeriodicalIF":4.4,"publicationDate":"2022-04-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42336757","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Warren Wiechmann, R. A. Edwards, Cheyenne Low, A. Wray, Megan Boysen-Osborn, Shannon L. Toohey
Purpose Technological advances are changing how students approach learning. The traditional note-taking methods of longhand writing have been supplemented and replaced by tablets, smartphones, and laptop note-taking. It has been theorized that writing notes by hand requires more complex cognitive processes and may lead to better retention. However, few studies have investigated the use of tablet-based note-taking, which allows the incorporation of typing, drawing, highlights, and media. We therefore sought to test the hypothesis that tablet-based note-taking would lead to equivalent or better recall than written note-taking. Methods We allocated 68 students into longhand, laptop, or tablet note-taking groups, and they watched and took notes on a presentation on which they were assessed for factual and conceptual recall. A second short distractor video was shown, followed by a 30-minute assessment at the University of California, Irvine campus, on a single day in August 2018. Notes were analyzed for content, supplemental drawings, and other media sources. Results No significant difference was found in the factual or conceptual recall scores for tablet, laptop, and handwritten note-taking (P=0.61). The median word count was 131.5 for tablets, 121.0 for handwriting, and 297.0 for laptops (P=0.01). The tablet group had the highest presence of drawing, highlighting, and other media/tools. Conclusion In light of conflicting research regarding the best note-taking method, our study showed that longhand note-taking is not superior to tablet or laptop note-taking. This suggests students should be encouraged to pick the note-taking method that appeals most to them. In the future, traditional note-taking may be replaced or supplemented with digital technologies that provide similar efficacy with more convenience.
{"title":"No difference in factual or conceptual recall comprehension for tablet, laptop, and handwritten note-taking by medical students in the United States: a survey-based observational study","authors":"Warren Wiechmann, R. A. Edwards, Cheyenne Low, A. Wray, Megan Boysen-Osborn, Shannon L. Toohey","doi":"10.3352/jeehp.2022.19.8","DOIUrl":"https://doi.org/10.3352/jeehp.2022.19.8","url":null,"abstract":"Purpose Technological advances are changing how students approach learning. The traditional note-taking methods of longhand writing have been supplemented and replaced by tablets, smartphones, and laptop note-taking. It has been theorized that writing notes by hand requires more complex cognitive processes and may lead to better retention. However, few studies have investigated the use of tablet-based note-taking, which allows the incorporation of typing, drawing, highlights, and media. We therefore sought to confirm the hypothesis that tablet-based note-taking would lead to equivalent or better recall as compared to written note-taking. Methods We allocated 68 students into longhand, laptop, or tablet note-taking groups, and they watched and took notes on a presentation on which they were assessed for factual and conceptual recall. A second short distractor video was shown, followed by a 30-minute assessment at the University of California, Irvine campus, over a single day period in August 2018. Notes were analyzed for content, supplemental drawings, and other media sources. Results No significant difference was found in the factual or conceptual recall scores for tablet, laptop, and handwritten note-taking (P=0.61). The median word count was 131.5 for tablets, 121.0 for handwriting, and 297.0 for laptops (P=0.01). The tablet group had the highest presence of drawing, highlighting, and other media/tools. 
Conclusion In light of conflicting research regarding the best note-taking method, our study showed that longhand note-taking is not superior to tablet or laptop note-taking. This suggests students should be encouraged to pick the note-taking method that appeals most to them. In the future, traditional note-taking may be replaced or supplemented with digital technologies that provide similar efficacy with more convenience.","PeriodicalId":46098,"journal":{"name":"Journal of Educational Evaluation for Health Professions","volume":" ","pages":""},"PeriodicalIF":4.4,"publicationDate":"2022-04-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48359456","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
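The note-taking study summarizes each group by its median word count. As a small illustration of that descriptive statistic, the per-student counts below are hypothetical, constructed only so that their medians match the values the abstract reports (131.5 tablet, 121.0 handwriting, 297.0 laptop); the study's raw data are not available here.

```python
from statistics import median

# Hypothetical word counts per student in each note-taking group,
# chosen so the group medians match those reported in the abstract.
notes = {
    "tablet":      [98, 120, 131, 132, 145, 160],
    "handwriting": [90, 110, 121, 121, 140, 155],
    "laptop":      [210, 260, 290, 304, 330, 360],
}

for group, counts in notes.items():
    print(group, median(counts))  # tablet 131.5, handwriting 121.0, laptop 297.0
```

Because word counts are typically skewed, the median (rather than the mean) is the conventional summary, which is consistent with the abstract's reporting.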
Flipped classroom models encourage student autonomy and reverse the order of traditional classroom content such as lectures and assignments. Virtual learning environments are ideal for executing flipped classroom models to improve critical thinking skills. This paper provides health professions faculty with guidance on developing a virtual flipped classroom in online graduate nutrition courses between September 2021 and January 2022 at the School of Health Professions, Rutgers, The State University of New Jersey. Examples of pre-class, live virtual face-to-face, and post-class activities are provided. Active learning, immediate feedback, and enhanced student engagement in a flipped classroom may lead to a more thorough synthesis of information and thereby stronger critical thinking skills. This article describes how a flipped classroom design that incorporates virtual face-to-face class sessions into online graduate courses can be used to promote critical thinking skills. Health professions faculty who teach online can apply the examples discussed to their own courses.
{"title":"Using a virtual flipped classroom model to promote critical thinking in online graduate courses in the United States: a case presentation","authors":"J. Tomesko, Deborah Cohen, J. Bridenbaugh","doi":"10.3352/jeehp.2022.19.5","DOIUrl":"https://doi.org/10.3352/jeehp.2022.19.5","url":null,"abstract":"Flipped classroom models encourage student autonomy and reverse the order of traditional classroom content such as lectures and assignments. Virtual learning environments are ideal for executing flipped classroom models to improve critical thinking skills. This paper provides health professions faculty with guidance on developing a virtual flipped classroom in online graduate nutrition courses between September 2021 and January 2022 at the School of Health Professions, Rutgers The State University of New Jersey. Examples of pre-class, live virtual face-to-face, and post-class activities are provided. Active learning, immediate feedback, and enhanced student engagement in a flipped classroom may result in a more thorough synthesis of information, resulting in increased critical thinking skills. This article describes how a flipped classroom model design in graduate online courses that incorporate virtual face-to-face class sessions in a virtual learning environment can be utilized to promote critical thinking skills. 
Health professions faculty who teach online can apply the examples discussed to their online courses.","PeriodicalId":46098,"journal":{"name":"Journal of Educational Evaluation for Health Professions","volume":" ","pages":""},"PeriodicalIF":4.4,"publicationDate":"2022-02-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43820394","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}