Michael James Weightman, Anna Chur-Hansen, Scott Richard Clark
Background: Artificial intelligence (AI) is rapidly changing both clinical psychiatry and the education of medical professionals. However, little is currently known about how AI is being discussed in the education and training of psychiatry for medical students and doctors around the world.
Objective: This paper aims to provide a snapshot of the available data on this subject as of 2024. A deliberately broad definition of AI was adopted to capture the widest range of relevant literature and applications, including machine learning, natural language processing, and generative AI tools.
Methods: A scoping review was conducted using both peer-reviewed publications from PubMed, Embase, PsycINFO, and Scopus databases, and gray literature sources. The criterion for inclusion was a description of how AI could be applied to education or training in psychiatry.
Results: A total of 26 records published between 2016 and 2024 were included. The key themes identified were (1) the imperative for an AI curriculum for students or doctors training in psychiatry, (2) uses of AI to develop educational resources, (3) uses of AI to develop clinical skills, (4) uses of AI for assessments, (5) academic integrity or ethical considerations surrounding the use of AI, and (6) tensions relating to competing priorities and directions.
Conclusions: Although a nascent field, it is clear that AI will increasingly impact assessment, clinical skills training, and the development of teaching resources in psychiatry. Training curricula will need to reflect the new knowledge and skills required for future clinical practice. Educators will need to be mindful of academic integrity risks and to emphasize development of critical thinking skills. Attitudes of psychiatrists toward the rise of AI in training remain underexplored.
AI in Psychiatric Education and Training From 2016 to 2024: Scoping Review of Trends. JMIR Medical Education. 2025;11:e81517. doi:10.2196/81517. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12755346/pdf/
Background: While artificial intelligence (AI)-generated feedback offers significant potential to overcome constraints on faculty time and resources associated with providing personalized feedback, its perceived usefulness can be undermined by algorithm aversion. In-context learning, particularly the few-shot approach, has emerged as a promising paradigm for enhancing AI performance. However, there is limited research investigating its usefulness, especially in health professions education.
Objective: This study aimed to compare the quality of AI-generated formative feedback from 2 settings, feedback generated in a zero-shot setting (hereafter, "zero-shot feedback") and feedback generated in a few-shot setting (hereafter, "few-shot feedback"), using a mixed methods approach in Japanese physical therapy education. Additionally, we examined the effect of algorithm aversion on these 2 feedback types.
Methods: A mixed methods study was conducted with 35 fourth-year physical therapy students (mean age 21.4, SD 0.7 years). Zero-shot feedback was created using Gemini 2.5 Pro with default settings, whereas few-shot feedback was generated by providing the same model with 9 teacher-created examples. The participants compared the quality of both feedback types using 3 methods: a direct preference question, the Feedback Perceptions Questionnaire (FPQ), and focus group interviews. Quantitative comparisons of FPQ scores were performed using the Wilcoxon signed rank test. To investigate algorithm aversion, the study examined how student perceptions changed before and after disclosure of the feedback's identity.
Results: Most students (26/35, 74%) preferred few-shot feedback over zero-shot feedback in terms of overall usefulness, although no significant difference was found between the 2 feedback types for the total FPQ score (P=.22). On the specific FPQ scales, few-shot feedback scored significantly higher than zero-shot feedback on fairness across all 3 items: "satisfied" (P=.02; r=0.407), "fair" (P=.04; r=0.341), and "justified" (P=.02; r=0.392). It also scored significantly higher on 1 item of the usefulness scale ("useful"; P=.02; r=0.401) and 1 item of the willingness scale ("invest a lot of effort"; P=.02; r=0.394). In contrast, zero-shot feedback scored significantly higher on the affect scale across 2 items: "successful" (P=.03; r=0.365) and "angry" (P=.008; r=0.443). Regarding algorithm aversion, evaluations of zero-shot feedback became more negative for 83% (15/18) of the items after identity disclosure, whereas positive perceptions of few-shot feedback were maintained or increased. Qualitative analysis revealed that students valued zero-shot feedback for its encouraging tone, whereas few-shot feedback was appreciated for its contextual understanding and concrete guidance for improvement.
Conclusions: Japanese physical therapy students perce
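The zero-shot versus few-shot distinction above is purely a matter of prompt construction: the few-shot condition prepends worked examples to the same instruction before the real task. A minimal Python sketch follows; the instruction wording and example case reports are hypothetical stand-ins, since the study's actual prompts and its 9 teacher-created examples are not reproduced here.

```python
# Sketch of the two prompting conditions. The instruction text and the
# example case reports below are hypothetical stand-ins; the study's
# actual prompts and teacher-created examples are not reproduced here.
def build_prompt(case_report, examples=None):
    """Build an LLM prompt; passing `examples` makes it few-shot."""
    parts = [
        "You are a physical therapy educator. Give formative feedback "
        "on the student's case report."
    ]
    # In-context learning: each (report, feedback) pair demonstrates the
    # desired style and depth of feedback before the real task.
    for report, feedback in (examples or []):
        parts.append(f"Case report:\n{report}\nFeedback:\n{feedback}")
    parts.append(f"Case report:\n{case_report}\nFeedback:")
    return "\n\n".join(parts)

zero_shot = build_prompt("Patient A: gait training after stroke ...")
few_shot = build_prompt(
    "Patient A: gait training after stroke ...",
    examples=[
        ("Patient B: ACL reconstruction rehab ...",
         "Clear goals; specify the outcome measures used."),
    ],
)
```

Under this sketch, the only thing that differs between the two conditions is the prompt string sent to the model (Gemini 2.5 Pro in the study); sampling settings stay at their defaults.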
Hisaya Sudo, Yoko Noborimoto, Jun Takahashi. Evaluation of Few-Shot AI-Generated Feedback on Case Reports in Physical Therapy Education: Mixed Methods Study. JMIR Medical Education. 2025;11:e85614. doi:10.2196/85614. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12811036/pdf/
Taiki W Nishihara, Fritz Gerald P Kalaw, Adelle Engmann, Aya Motoyoshi, Paapa Mensah-Kane, Deepa Gupta, Victoria Patronilo, Linda M Zangwill, Shahin Hallaj, Amirhossein Panahi, Garrison W Cottrell, Bradley Voytek, Virginia R de Sa, Sally L Baxter
Background: The integration of artificial intelligence (AI) and machine learning (ML) into biomedical research requires a workforce fluent in both computational methods and clinical applications. Structured, interdisciplinary training opportunities remain limited, creating a gap between data scientists and clinicians. The National Institutes of Health's Bridge to Artificial Intelligence (Bridge2AI) initiative launched the Artificial Intelligence-Ready and Exploratory Atlas for Diabetes Insights (AI-READI) data generation project to address this gap. AI-READI is creating a multimodal, FAIR (findable, accessible, interoperable, and reusable) dataset, including ophthalmic imaging, physiologic measurements, wearable sensor data, and survey responses, from approximately 4000 participants with or at risk for type 2 diabetes. In parallel, AI-READI established a year-long mentored research program that begins with a 2-week immersive summer bootcamp to provide foundational AI/ML skills grounded in domain-relevant biomedical data.
Objective: To describe the design, iterative refinement, and outcomes of the AI-READI Bootcamp, and to share lessons for creating future multidisciplinary AI/ML training programs in biomedical research.
Methods: Held annually at the University of California San Diego, the bootcamp combines 80 hours of lectures, coding sessions, and small-group mentorship. Year 1 introduced Python programming, classical ML techniques (eg, logistic regression, convolutional neural networks), and data science methods, such as principal component analysis and clustering, using public datasets. In Year 2, the curriculum was refined based on structured participant feedback: reducing cohort size to increase individualized mentorship, integrating the AI-READI dataset (including retinal images and structured clinical variables), and adding modules on large language models and FAIR data principles. Participant characteristics and satisfaction were assessed through standardized pre- and postbootcamp surveys, and qualitative feedback was analyzed thematically by independent coders.
Results: Seventeen participants attended Year 1 and 7 attended Year 2, with an instructor-to-student ratio of approximately 1:2 in the latter. Across both years, postbootcamp evaluations indicated high satisfaction, with Year 2 participants reporting improved experiences due to smaller cohorts, earlier integration of the AI-READI dataset, and greater emphasis on applied learning. In Year 2, mean scores for instructor effectiveness, staff support, and overall enjoyment were perfect (5.00/5.00). Qualitative feedback emphasized the value of working with domain-relevant, multimodal datasets; the benefits of peer collaboration; and the applicability of skills to structured research projects during the subsequent internship year.
Conclusions: The AI-READI Bootcamp illustrates how
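To make the curriculum's scope concrete, one Year 1 topic (logistic regression) can be implemented from scratch in a few lines. This toy sketch is not from the bootcamp materials, which taught the technique with standard libraries on public datasets; it fits a 1-D classifier by batch gradient descent.

```python
# Toy logistic regression fit by batch gradient descent (illustrative
# only; not bootcamp code). Minimizes the average logistic loss.
import math

def fit_logistic(xs, ys, lr=0.1, steps=2000):
    """Return (weight, bias) for 1-D inputs xs and 0/1 labels ys."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # sigmoid prediction
            gw += (p - y) * x / n                     # dLoss/dw
            gb += (p - y) / n                         # dLoss/db
        w -= lr * gw
        b -= lr * gb
    return w, b

# Separable toy data: label 1 when x > 2.5
w, b = fit_logistic([1.0, 2.0, 3.0, 4.0], [0, 0, 1, 1])

def predict(x):
    """Predicted probability of class 1."""
    return 1.0 / (1.0 + math.exp(-(w * x + b)))
```

The same loop generalizes to multiple features by making `w` a vector; in practice the bootcamp's library-based route (eg, scikit-learn) is the sensible one.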
Fostering Multidisciplinary Collaboration in Artificial Intelligence and Machine Learning Education: Tutorial Based on the AI-READI Bootcamp. JMIR Medical Education. 2025;11:e83154. doi:10.2196/83154. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12747659/pdf/
Jie Xu, Jihong Sha, Song Jia, Jiao Li, Lei Xu, Zhihua Shao
Background: Technological innovation is reshaping the landscape of medical education, bringing revolutionary changes to traditional teaching methods. In this context, upgrading the teaching model for microscopy, one of the core skills in medical education, is particularly important. Proficiency in microscope operation not only affects medical students' pathology diagnosis abilities but also directly impacts the precision of surgical procedures and laboratory analysis skills. However, current microscopy pedagogy faces dual challenges: traditional teaching lacks real-time image sharing capabilities, severely limiting the effectiveness of immediate instructor guidance, and students find it difficult to independently identify technical flaws in their operations, leading to inefficient skill acquisition. Although whole-slide imaging-based microscopy systems have partially addressed the issue of image visualization, they cannot replicate the tactile feedback and physical interaction experience of the real world. The breakthrough development of 5G communication technology, with its ultrahigh transmission speed and ultralow latency, provides an innovative solution to this teaching challenge. Leveraging this technological advantage, Tongji University's biology laboratory has pioneered the deployment of a 5G local area network (LAN)-supported digital interactive microscopy system, creating a new model for microscopy education.
Objective: This study aims to investigate the efficacy of an innovative 5G LAN-powered interactive digital microscopy system in enhancing microscopy training efficiency, evaluated through medical students' academic performance and learning experience.
Methods: Using a quasi-experimental design, we quantified system effectiveness via academic performance metrics and learning experience dimensions. A total of 39 students enrolled in the biology course were randomly assigned to 2 groups: one using traditional optical microscopes (control) and the other using the digital microscopy interactive system (DMIS). Academic performance was evaluated through a knowledge test and 3 laboratory reports. A 5-point Likert-scale questionnaire was used to gather feedback on students' learning experiences. In addition, the DMIS group was asked to evaluate the specific functions of the system.
Results: In the knowledge test, no statistically significant overall difference was found between the 2 groups; however, the DMIS group scored significantly higher in Lecture 2 (P<.05). In the laboratory reports, the DMIS group performed significantly better than the control group (mean 90.33, SD 2.63 vs mean 80.53, SD 3.52; P<.001). Questionnaire results indicated that the DMIS group had a positive evaluation of the system and expressed greater confidence in its future application. For the evaluation of the laboratory lectures, the DMIS group received
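The laboratory-report comparison gives only summary statistics. As a hedged illustration, a two-sample test statistic can be recovered from means, SDs, and group sizes alone; the abstract does not state which test was used or the per-group sizes, so Welch's t test and a 20/19 split of the 39 students are assumptions of this sketch.

```python
# Welch's t statistic and Welch-Satterthwaite degrees of freedom from
# summary statistics only. The 20/19 group split is an assumption; the
# abstract reports only the 39-student total and does not name the test.
import math

def welch_t(m1, s1, n1, m2, s2, n2):
    """Welch's t and approximate df for two independent sample means."""
    v1, v2 = s1 ** 2 / n1, s2 ** 2 / n2        # squared standard errors
    t = (m1 - m2) / math.sqrt(v1 + v2)
    # Welch-Satterthwaite approximation for degrees of freedom
    df = (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))
    return t, df

# Reported lab-report scores: DMIS 90.33 (SD 2.63) vs control 80.53 (SD 3.52)
t, df = welch_t(90.33, 2.63, 20, 80.53, 3.52, 19)
```

A t statistic near 10 on roughly 33 degrees of freedom is consistent with the reported P<.001, whatever exact test the authors ran.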
Effectiveness of a 5G Local Area Network-Based Digital Microscopy Interactive System: Quasi-Experimental Design. JMIR Medical Education. 2025;11:e70256. doi:10.2196/70256. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12780701/pdf/
Daniel García-Torres, César Fernández, José Joaquín Mira, Alexandra Morales, María Asunción Vicente
Background: Virtual simulated patients (VSPs) powered by generative artificial intelligence (GAI) offer a promising tool for training clinical interviewing skills; yet, little is known about how different system- and user-level variables shape students' perceptions of these interactions.
Objective: We aim to study psychology students' perceptions of GAI-driven VSPs and examine how demographic factors, system parameters, and interaction characteristics influence such perceptions.
Methods: We conducted a total of 1832 recorded interactions involving 156 psychology students with 13 GAI-generated VSPs configured with varying temperature settings (0.1, 0.5, 0.9). For each student, we collected age and sex; for each interview, we recorded interview length (total number of question-answer turns), number of connectivity failures, the specific VSP consulted, and the model temperature. After every interview, students provided a 1-10 global rating and open-ended comments regarding strengths and areas for improvement. At the end of the training sequence, they also reported perceived improvement in diagnostic ability. Statistical analyses assessed the influence of different variables on global ratings: demographics, interaction-level data, and GAI temperature setting. Sentiment analysis was conducted to evaluate the VSPs' clinical realism.
Results: Statistical analysis showed that female students rated the tool significantly higher (mean rating 9.25/10) than male students (mean rating 8.94/10; Kruskal-Wallis test, H=8.7; P=.003). In contrast, no significant correlation was found between global rating and age (r=0.02, 95% CI -0.03 to 0.06; P=.42), interview length (r=0.04, 95% CI -0.2 to 0.10; P=.18), or frequency of participation (Kruskal-Wallis test, H=4.62; P=.20). A moderate negative correlation emerged between connectivity failures and ratings (r=-0.26, 95% CI -0.41 to -0.10; P=.002). Temperature settings significantly influenced ratings (Kruskal-Wallis test, H=6.93; P=.03; η²=0.02), with higher scores at temperature 0.9 compared with 0.1 (Dunn's test, P=.04). Concerning learning outcomes, self-perceived improvement in diagnostic ability was reported by 94% (94/100) of students; however, final practical examination scores (mean 6.67, SD 1.42) did not differ significantly from those of the previous cohort without VSP training (mean 6.42, SD 1.56). Sentiment analysis indicated predominantly negative sentiment in GAI responses (median negativity 0.8903, IQR 0.306-0.961), consistent with clinical realism.
Conclusions: GAI-driven VSPs were well received by psychology students, with student gender and system-level variables (particularly temperature settings and connection stability) shaping user evaluations. Although participants perceived the training as beneficial for their diagnostic skills, objective examination performance did not differ significantly from that of the previous cohort. However, the lack of randomization limits the generalizability of these findings, and further experimental studies are needed.
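The Kruskal-Wallis test used throughout these analyses reduces to ranking the pooled observations and comparing per-group rank sums. A from-scratch sketch follows, run on invented toy ratings rather than the study's data; the tie-correction factor that statistical packages apply is omitted for brevity.

```python
# Kruskal-Wallis H statistic, as used to compare global ratings across
# the 3 temperature groups. Toy data only; the tie-correction divisor
# applied by statistical packages is omitted for brevity.
def kruskal_wallis_h(groups):
    """H statistic for k independent groups (ties receive average ranks)."""
    pooled = sorted((value, g) for g, grp in enumerate(groups) for value in grp)
    n = len(pooled)
    rank_sum = [0.0] * len(groups)
    i = 0
    while i < n:
        j = i
        while j < n and pooled[j][0] == pooled[i][0]:
            j += 1                      # [i, j) is a run of tied values
        avg_rank = (i + 1 + j) / 2      # average of ranks i+1 ... j
        for k in range(i, j):
            rank_sum[pooled[k][1]] += avg_rank
        i = j
    return 12.0 / (n * (n + 1)) * sum(
        rs ** 2 / len(grp) for rs, grp in zip(rank_sum, groups)
    ) - 3.0 * (n + 1)

# Hypothetical 1-10 ratings for temperatures 0.1, 0.5, and 0.9
h = kruskal_wallis_h([[7, 8, 8], [8, 9, 9], [9, 9, 10]])
```

H is then compared against a chi-square distribution with k-1 degrees of freedom (2 here), which is how the reported P values for H=8.7 and H=6.93 would be obtained.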
{"title":"Using AI-Based Virtual Simulated Patients for Training in Psychopathological Interviewing: Cross-Sectional Observational Study.","authors":"Daniel García-Torres, César Fernández, José Joaquín Mira, Alexandra Morales, María Asunción Vicente","doi":"10.2196/78857","DOIUrl":"10.2196/78857","url":null,"abstract":"<p><strong>Background: </strong>Virtual simulated patients (VSPs) powered by generative artificial intelligence (GAI) offer a promising tool for training clinical interviewing skills; yet, little is known about how different system- and user-level variables shape students' perceptions of these interactions.</p><p><strong>Objective: </strong>We aim to study psychology students' perceptions of GAI-driven VSPs and examine how demographic factors, system parameters, and interaction characteristics influence such perceptions.</p><p><strong>Methods: </strong>We conducted a total of 1832 recorded interactions involving 156 psychology students with 13 GAI-generated VSPs configured with varying temperature settings (0.1, 0.5, 0.9). For each student, we collected age and sex; for each interview, we recorded interview length (total number of question-answer turns), number of connectivity failures, the specific VSP consulted, and the model temperature. After every interview, students provided a 1-10 global rating and open-ended comments regarding strengths and areas for improvement. At the end of the training sequence, they also reported perceived improvement in diagnostic ability. Statistical analyses assessed the influence of different variables on global ratings: demographics, interaction-level data, and GAI temperature setting. Sentiment analysis was conducted to evaluate the VSPs' clinical realism.</p><p><strong>Results: </strong>Statistical analysis showed that female students rated the tool significantly higher (mean rating 9.25/10) than male students (mean rating 8.94/10; Kruskal-Wallis test, H=8.7; P=.003). 
In contrast, no significant correlation was found between global rating and age (r=0.02, 95% CI -0.03 to 0.06; P=.42), interview length (r=0.04, 95% CI -0.2 to 0.10; P=.18), or frequency of participation (Kruskal-Wallis test, H=4.62; P=.20). A moderate negative correlation emerged between connectivity failures and ratings (r=-0.26, 95% CI -0.41 to -0.10; P=.002). Temperature settings significantly influenced ratings (Kruskal-Wallis test, H=6.93; P=.03; η²=0.02), with higher scores at temperature 0.9 compared with 0.1 (Dunn's test, P=.04). Concerning learning outcomes, self-perceived improvement in diagnostic ability was reported by 94% (94/100) of students; however, final practical examination scores (mean 6.67, SD 1.42) did not differ significantly from those of the previous cohort without VSP training (mean 6.42, SD 1.56). Sentiment analysis indicated predominantly negative sentiment in GAI responses (median negativity 0.8903, IQR 0.306-0.961), consistent with clinical realism.</p><p><strong>Conclusions: </strong>GAI-driven VSPs were well-received by psychology students, with student gender and system-level variables (particularly temperature settings and connection stability) shaping user evaluations. 
Although participants perceived the training as beneficial for their diagnostic skills, objective examination performance did not differ significantly from that of the previous cohort. However, the lack of randomization limits the generalizability of these results, and further experimental studies are needed.</p>","PeriodicalId":36236,"journal":{"name":"JMIR Medical Education","volume":"11 ","pages":"e78857"},"PeriodicalIF":3.2,"publicationDate":"2025-12-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12775747/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145811529","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Yuki Morimoto, Kiyoshi Shikino, Yukihiro Nomura, Shoichi Ito
Background: The Japanese National Medical Licensing Examination (NMLE) is mandatory for all medical graduates seeking to become licensed physicians in Japan. Given the cultural emphasis on summative assessment, the NMLE has had a significant impact on Japanese medical education. Although the NMLE Content Guidelines have been revised approximately every five years over the last 2 decades, objective literature analyzing how the examination itself has evolved is absent.
Objective: To provide a holistic view of the trends of the actual examination over time, this study used a combined rule-based and data-driven approach. We primarily focused on classifying the items according to the perspectives outlined in the NMLE Content Guidelines, complementing this approach with a natural language processing technique called topic modeling to identify latent topics.
Methods: We collected publicly available NMLE data for 2001-2024. Six examination iterations (2880 items) were manually classified from 3 perspectives (level, content, and taxonomy) based on pre-established rules derived from the guidelines. Temporal trends within each classification were evaluated using the Cochran-Armitage test. Additionally, we conducted topic modeling for all 24 examination iterations (11,540 items) using the bidirectional encoder representations from transformers-based topic modeling framework. Temporal trends were traced using linear regression models of topic frequencies to identify topics growing in prominence.
Results: In the level classification, the proportion of items addressing common or emergent diseases increased from 60% (115/193) to 76% (111/147; P<.001). In the content classification, the proportion of items assessing knowledge of pathophysiology decreased from 52% (237/459) to 33% (98/293; P<.001), whereas the proportion assessing practical knowledge of primary emergency care increased from 21% (95/459) to 29% (84/293; P<.001). In the taxonomy classification, the proportion of items that could be answered solely through simple recall of knowledge decreased from 51% (279/550) to 30% (118/400; P<.001), while the proportion assessing advanced analytical skills, such as interpreting and evaluating the meaning of each answer choice according to the given context, increased from 4% (21/550) to 19% (75/400; P<.001). Topic modeling identified 25 distinct topics, of which 10 exhibited an increasing trend. Non-organ-specific topics with notable increases included "comprehensive clinical items," "accountability in medical practice and patients' rights," "care, daily living support, and community health care," and "infection control and safety management in basic clinical procedures."
Conclusions: This study identified significant shifts in the Japanese NMLE over the past 2 decades, suggesting that Japanese undergraduate medical education is evolving to place greate
{"title":"Trends in the Japanese National Medical Licensing Examination: Cross-Sectional Study.","authors":"Yuki Morimoto, Kiyoshi Shikino, Yukihiro Nomura, Shoichi Ito","doi":"10.2196/78214","DOIUrl":"10.2196/78214","url":null,"abstract":"<p><strong>Background: </strong>The Japanese National Medical Licensing Examination (NMLE) is mandatory for all medical graduates seeking to become licensed physicians in Japan. Given the cultural emphasis on summative assessment, the NMLE has had a significant impact on Japanese medical education. Although the NMLE Content Guidelines have been revised approximately every five years over the last 2 decades, objective literature analyzing how the examination itself has evolved is absent.</p><p><strong>Objective: </strong>To provide a holistic view of the trends of the actual examination over time, this study used a combined rule-based and data-driven approach. We primarily focused on classifying the items according to the perspectives outlined in the NMLE Content Guidelines, complementing this approach with a natural language processing technique called topic modeling to identify latent topics.</p><p><strong>Methods: </strong>We collected publicly available NMLE data for 2001-2024. Six examination iterations (2880 items) were manually classified from 3 perspectives (level, content, and taxonomy) based on pre-established rules derived from the guidelines. Temporal trends within each classification were evaluated using the Cochran-Armitage test. Additionally, we conducted topic modeling for all 24 examination iterations (11,540 items) using the bidirectional encoder representations from transformers-based topic modeling framework. Temporal trends were traced using linear regression models of topic frequencies to identify topics growing in prominence.</p><p><strong>Results: </strong>In the level classification, the proportion of items addressing common or emergent diseases increased from 60% (115/193) to 76% (111/147; P<.001). 
In the content classification, the proportion of items assessing knowledge of pathophysiology decreased from 52% (237/459) to 33% (98/293; P<.001), whereas the proportion assessing practical knowledge of primary emergency care increased from 21% (95/459) to 29% (84/293; P<.001). In the taxonomy classification, the proportion of items that could be answered solely through simple recall of knowledge decreased from 51% (279/550) to 30% (118/400; P<.001), while the proportion assessing advanced analytical skills, such as interpreting and evaluating the meaning of each answer choice according to the given context, increased from 4% (21/550) to 19% (75/400; P<.001). Topic modeling identified 25 distinct topics, of which 10 exhibited an increasing trend. Non-organ-specific topics with notable increases included \"comprehensive clinical items,\" \"accountability in medical practice and patients' rights,\" \"care, daily living support, and community health care,\" and \"infection control and safety management in basic clinical procedures.\"</p><p><strong>Conclusions: </strong>This study identified significant shifts in the Japanese NMLE over the past 2 decades, suggesting that Japanese undergraduate medical education is evolving to place greate","PeriodicalId":36236,"journal":{"name":"JMIR Medical Education","volume":"11 ","pages":"e78214"},"PeriodicalIF":3.2,"publicationDate":"2025-12-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12775762/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145821367","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
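The Cochran-Armitage test used above for temporal trends has no direct SciPy wrapper, but its standard form is short enough to sketch. The function below implements the usual trend statistic for a 2×k table with ordered group scores; the example counts are invented for illustration (only the first and last proportions echo the published 60% and 76%), not the paper's full data:

```python
import numpy as np
from scipy.stats import norm

def cochran_armitage(successes, totals, scores=None):
    """Two-sided Cochran-Armitage test for trend across ordered groups.

    successes[i] / totals[i] is the proportion in ordered group i
    (e.g., exam iterations by year); scores default to 0, 1, 2, ...
    """
    r = np.asarray(successes, dtype=float)
    n = np.asarray(totals, dtype=float)
    s = np.arange(len(r), dtype=float) if scores is None else np.asarray(scores, dtype=float)
    p_bar = r.sum() / n.sum()                      # pooled proportion
    t = np.sum(s * (r - n * p_bar))                # trend statistic
    var = p_bar * (1 - p_bar) * (np.sum(s**2 * n) - np.sum(s * n) ** 2 / n.sum())
    z = t / np.sqrt(var)
    return z, 2 * norm.sf(abs(z))                  # z and two-sided P value

# Hypothetical item counts across 6 examination iterations.
z, p = cochran_armitage([115, 118, 122, 125, 130, 111],
                        [193, 190, 185, 180, 175, 147])
print(f"z={z:.2f}, P={p:.4f}")
```

With these monotonically rising proportions, the test returns a clearly positive z and a P value well below .001, matching the direction of the reported trend.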
Susan Gijsbertje Brouwer de Koning, Amy Hofman, Sonja Gerber, Vera Lagerburg, Michelle van den Boorn
{"title":"Correction: Comparing the Perceived Realism and Adequacy of Venipuncture Training on an in-House Developed 3D-Printed Arm With a Commercially Available Arm: Randomized, Single-Blind, Cross-Over Study.","authors":"Susan Gijsbertje Brouwer de Koning, Amy Hofman, Sonja Gerber, Vera Lagerburg, Michelle van den Boorn","doi":"10.2196/89670","DOIUrl":"10.2196/89670","url":null,"abstract":"","PeriodicalId":36236,"journal":{"name":"JMIR Medical Education","volume":"11 ","pages":"e89670"},"PeriodicalIF":3.2,"publicationDate":"2025-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12721218/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145805599","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The rapid transformation of the health care landscape requires physicians not only to be clinically skilled but also to navigate and lead a highly dynamic, innovation-driven environment. It also offers physicians an avenue to significantly enhance their ability to help their patients through participation in health innovation projects. Despite this growing need and opportunity, few medical schools provide formal training in innovation and entrepreneurship (I&E). In this perspective, we examine the need for I&E education in medical curricula by exploring student interest, effective program models, and implementation strategies. To better understand medical students' interest in innovation and willingness to participate in I&E programs during medical school, we surveyed 480 medical students at our institution, the Johns Hopkins University School of Medicine, and received 90 responses (19% response rate). We observed strong interest in health care I&E, with 97% (87/90) of respondents valuing knowledge or experience in I&E and 63% (56/90) expressing intent to incorporate I&E into their careers. To assess the real-world impact of I&E education on medical professionals, we surveyed 12 alumni of the Johns Hopkins Center for Bioengineering Innovation and Design (CBID) Master's program who had also completed medical school. Graduates reported that their experiences cultivated transferable skills (design thinking, interdisciplinary collaboration, and leadership) that shaped their professional trajectories. We propose three models for incorporating I&E education into existing medical curricula (short-term workshops, one-year gap programs, and longitudinal tracks) and discuss their advantages and trade-offs. Early and structured exposure to I&E education in medical school empowers students to identify unmet clinical needs, collaborate across disciplines, and develop real-world solutions. 
As the pace of innovation continues to accelerate, integration of I&E education into medical curricula offers a timely opportunity for medical schools to cultivate physician leaders in this space.
{"title":"The Need for Health Care Innovation Training in Medical Education.","authors":"Lily Zhu, Jeffrey Khong, Oren Wei, Katherine C Chretien, Youseph Yazdi","doi":"10.2196/79489","DOIUrl":"10.2196/79489","url":null,"abstract":"<p><strong>Unlabelled: </strong>The rapid transformation of the health care landscape requires physicians to not only be skilled clinically but also navigate and lead a highly dynamic, innovation-driven environment. This also provides an avenue for physicians to significantly enhance their ability to help their patients, through participation in health innovation projects. Despite this growing need and opportunity, few medical schools provide formal training in innovation and entrepreneurship (I&E). In this perspective, we examine the need for I&E education in medical curricula by exploring student interest, effective program models, and implementation strategies. To better understand medical student interest in innovation and willingness to participate in I&E programs during medical school, we surveyed 480 medical students at our institution, the Johns Hopkins University School of Medicine, and received 90 responses with a 19% response rate. We observed a strong interest in health care I&E, with 97% (87/90) of respondents valuing knowledge or experience in I&E and 63% (56/90) expressing intent to incorporate I&E into their careers. To assess the real-world impact of I&E education on medical professionals, we surveyed 12 alumni of the Johns Hopkins Center for Bioengineering Innovation and Design (CBID) Master's program who had also completed medical school. Graduates reported that their experiences cultivated transferable skills-design thinking, interdisciplinary collaboration, and leadership-that shaped their professional trajectories. We propose three models for incorporating I&E education into existing medical curricula-short-term workshops, one-year gap programs, and longitudinal tracks-and discuss their advantages and trade-offs. 
Early and structured exposure to I&E education in medical school empowers students to identify unmet clinical needs, collaborate across disciplines, and develop real-world solutions. As the pace of innovation continues to accelerate, integration of I&E education into medical curricula offers a timely opportunity for medical schools to cultivate physician leaders in this space.</p>","PeriodicalId":36236,"journal":{"name":"JMIR Medical Education","volume":"11 ","pages":"e79489"},"PeriodicalIF":3.2,"publicationDate":"2025-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12716411/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145795037","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Roy La Touche, Álvaro Reina-Varona, Mónica Grande-Alonso, José Vicente León-Hernández, Joaquín Pardo-Montero, Néstor Requejo-Salinas, Raúl Ferrer-Peña, Alba Paris-Alemany
Background: Social media platforms are increasingly integrated into higher education, enabling collaborative, student-centered learning. Yet, few instruments specifically measure students' satisfaction with these activities across platforms. A brief, valid tool is needed to evaluate perceived quality and guide instructional design in social media-based learning environments.
Objective: This study investigated the use of social media as educational tools in the university environment, with the aim of designing and validating the CuSAERS (Questionnaire of Satisfaction With Educational Activities Performed on Social Media).
Methods: Using a mixed and sequential methodology, we explored the perceptions of bachelor's and master's degree students in physiotherapy who participated in teaching activities through X (formerly Twitter) and Instagram. The first phase of the project identified key dimensions of satisfaction from the literature, expert interviews, and cognitive interviews. The second phase assessed the psychometric properties of the CuSAERS in a sample of 150 students, addressing construct validity, internal reliability, concurrent validity, and discriminant validity.
Results: Exploratory factor analysis supported a 3-factor structure (perception of learning, task satisfaction/environment, and self-realization) explaining 61.9% of the variance, with acceptable overall reliability. Concurrent validity was supported by moderate correlations with the Academic Satisfaction Scale. Master's students reported higher scores than bachelor's students.
Conclusions: CuSAERS provides preliminary evidence as a promising measure of student satisfaction with social media-based learning activities; its use should remain formative and cautious until confirmatory and invariance analyses are completed.
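An exploratory factor analysis of the kind reported above can be sketched with scikit-learn. The item responses below are synthetic (150 simulated students, 12 hypothetical items generated from 3 latent factors); the item counts and loadings are assumptions for illustration, not the CuSAERS data:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(42)

# Synthetic stand-in for item-level responses: 150 students x 12 items
# generated from 3 latent factors, mirroring the hypothesized structure
# (perception of learning, task satisfaction/environment, self-realization).
n_students, n_items, n_factors = 150, 12, 3
latent = rng.normal(size=(n_students, n_factors))
loadings = np.zeros((n_factors, n_items))
for f in range(n_factors):  # each factor loads on 4 consecutive items
    loadings[f, f * 4:(f + 1) * 4] = rng.uniform(0.6, 0.9, 4)
X = latent @ loadings + rng.normal(scale=0.4, size=(n_students, n_items))

# Fit a 3-factor model with varimax rotation, as is typical for EFA.
fa = FactorAnalysis(n_components=n_factors, rotation="varimax")
scores = fa.fit_transform(X)

# Share of total modeled variance attributed to the common factors
# (sum of squared loadings vs. loadings plus item-specific noise),
# analogous to the 61.9% reported for the 3-factor solution.
common = np.sum(fa.components_ ** 2)
ratio = common / (common + np.sum(fa.noise_variance_))
print(f"variance explained by 3 factors: {ratio:.1%}")
```

Note that scikit-learn's `FactorAnalysis` fits a maximum-likelihood factor model; dedicated EFA packages differ in extraction method and fit indices, so this is a structural sketch rather than a reproduction of the paper's analysis.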
{"title":"Student Satisfaction in Social Media-Based Learning Environments: Development, Validation, and Psychometric Evaluation of the CuSAERS (Questionnaire of Satisfaction With Educational Activities Performed on Social Media).","authors":"Roy La Touche, Álvaro Reina-Varona, Mónica Grande-Alonso, José Vicente León-Hernández, Joaquín Pardo-Montero, Néstor Requejo-Salinas, Raúl Ferrer-Peña, Alba Paris-Alemany","doi":"10.2196/73805","DOIUrl":"10.2196/73805","url":null,"abstract":"<p><strong>Background: </strong>Social media platforms are increasingly integrated into higher education, enabling collaborative, student-centered learning. Yet, few instruments specifically measure students' satisfaction with these activities across platforms. A brief, valid tool is needed to evaluate perceived quality and guide instructional design in social media-based learning environments.</p><p><strong>Objective: </strong>This study investigated the use of social media as educational tools in the university environment, with the aim of designing and validating the CuSAERS (Questionnaire of Satisfaction With Educational Activities Performed on Social Media).</p><p><strong>Methods: </strong>Using a mixed and sequential methodology, we explored the perceptions of bachelor's and master's degree students in physiotherapy who participated in teaching activities through X (formerly Twitter) and Instagram. The first phase of the project identified key dimensions of satisfaction from the literature, expert interviews, and cognitive interviews. The second phase assessed the psychometric properties of the CuSAERS in a sample of 150 students, addressing construct validity, internal reliability, concurrent validity, and discriminant validity.</p><p><strong>Results: </strong>Exploratory factor analysis supported a 3-factor structure-perception of learning, task satisfaction/environment, and self-realization-explaining 61.9% of the variance, with acceptable overall reliability. 
Concurrent validity was supported by moderate correlations with the Academic Satisfaction Scale. Master's students reported higher scores than bachelor's students.</p><p><strong>Conclusions: </strong>CuSAERS provides preliminary evidence as a promising measure of student satisfaction with social media-based learning activities; its use should remain formative and cautious until confirmatory and invariance analyses are completed.</p><p><strong>Trial registration: </strong>Not applicable.</p>","PeriodicalId":36236,"journal":{"name":"JMIR Medical Education","volume":"11 ","pages":"e73805"},"PeriodicalIF":3.2,"publicationDate":"2025-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12759299/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145795012","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Anna Janssen, Andrew Coggins, James Tadros, Deleana Quinn, Amith Shetty, Tim Shaw
Background: Electronic medical records (EMRs) are a potentially rich source of information on individual health care providers' clinical activities. These data provide an opportunity to tailor web-based learning for health care providers so that it aligns closely with their practice. There is increasing interest in the use of EMR data to understand performance and support continuous, targeted education for health care providers.
Objective: This study aims to understand the feasibility and acceptability of harnessing EMR data to adaptively deliver a web-based learning program to early-career physicians.
Methods: The intervention consisted of a microlearning program in which content was adaptively delivered by an algorithm driven by EMR data. The program content consisted of a library of questions covering best practice management of common emergency department presentations. Study participants were early-career physicians undergoing training in emergency care. The study design involved 3 design cycles, each iteratively changing aspects of the adaptive algorithm based on an end-of-cycle evaluation to optimize the intervention. At the end of each cycle, an online survey and analysis of learning platform metrics were used to evaluate the feasibility and acceptability of the program. Within each cycle, a new cohort of participants was recruited and enrolled in the adaptive program for 6 weeks.
Results: Across each cycle, all 75 participants triggered at least 1 question from their EMR data, with the majority triggering 1 question per week. The majority of participants in the study indicated that the online program was engaging and the content felt aligned with clinical practice.
Conclusions: The use of EMR data to deliver an adaptive online learning program for emergency trainees is both feasible and acceptable. However, further research is required on the optimal design of such adaptive solutions to ensure training is closely aligned with clinical practice.
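The paper does not publish its adaptive algorithm, but the core idea (EMR activity triggers practice-aligned questions) can be sketched as a simple mapping. The diagnosis codes and topic names below are invented for illustration:

```python
from collections import Counter

# Hypothetical mapping from EMR diagnosis codes attached to a trainee's
# recent cases to microlearning question topics.
TOPIC_MAP = {
    "S52": "forearm fracture management",
    "J45": "acute asthma",
    "I21": "acute coronary syndrome",
}

def pick_weekly_topic(emr_codes):
    """Return the question topic matching the trainee's most frequent
    recent presentation, or None if no code maps to a topic."""
    mapped = [TOPIC_MAP[code] for code in emr_codes if code in TOPIC_MAP]
    if not mapped:
        return None
    return Counter(mapped).most_common(1)[0][0]

# A trainee who mostly saw asthma presentations this week is served
# the corresponding question.
print(pick_weekly_topic(["J45", "I21", "J45", "Z00"]))  # → acute asthma
```

In the study, this kind of trigger fired roughly once per week per participant; a production version would also need to handle ties, question repetition, and trainees whose activity maps to no topic.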
{"title":"Using Electronic Health Data to Deliver an Adaptive Online Learning Solution to Emergency Trainees: Mixed Methods Pilot Study.","authors":"Anna Janssen, Andrew Coggins, James Tadros, Deleana Quinn, Amith Shetty, Tim Shaw","doi":"10.2196/65287","DOIUrl":"10.2196/65287","url":null,"abstract":"<p><strong>Background: </strong>Electronic medical records (EMRs) are a potentially rich source of information on an individual's health care providers' clinical activities. These data provide an opportunity to tailor web-based learning for health care providers to align closely with their practice. There is increasing interest in the use of EMR data to understand performance and support continuous and targeted education for health care providers.</p><p><strong>Objective: </strong>This study aims to understand the feasibility and acceptability of harnessing EMR data to adaptively deliver a web-based learning program to early-career physicians.</p><p><strong>Methods: </strong>The intervention consisted of a microlearning program where content was adaptively delivered using an algorithm input with EMR data. The microlearning program content consisted of a library of questions covering topics related to best practice management of common emergency department presentations. Study participants were early-career physicians undergoing training in emergency care. The study design involved 3 design cycles, which iteratively changed aspects of the adaptive algorithm based on an end-of-cycle evaluation to optimize the intervention. At the end of each cycle, an online survey and analysis of learning platform metrics were used to evaluate the feasibility and acceptability of the program. 
Within each cycle, participants were recruited and enrolled in the adaptive program for 6 weeks, with new cohorts of participants in each cycle.</p><p><strong>Results: </strong>Across each cycle, all 75 participants triggered at least 1 question from their EMR data, with the majority triggering 1 question per week. The majority of participants in the study indicated that the online program was engaging and the content felt aligned with clinical practice.</p><p><strong>Conclusions: </strong>The use of EMR data to deliver an adaptive online learning program for emergency trainees is both feasible and acceptable. However, further research is required on the optimal design of such adaptive solutions to ensure training is closely aligned with clinical practice.</p>","PeriodicalId":36236,"journal":{"name":"JMIR Medical Education","volume":"11 ","pages":"e65287"},"PeriodicalIF":3.2,"publicationDate":"2025-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12711133/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145776069","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}