
Latest Publications in JMIR Medical Education

Correlation Between Electroencephalogram Brain-to-Brain Synchronization and Team Strategies and Tools to Enhance Performance and Patient Safety Scores During Online Hexad Virtual Simulation-Based Interprofessional Education: Cross-Sectional Correlational Study.
IF 3.2 Q1 EDUCATION, SCIENTIFIC DISCIPLINES Pub Date : 2025-10-20 DOI: 10.2196/69725
Atthaphon Viriyopase, Khuansiri Narajeenron

Background: Team performance is crucial in crisis situations. Although the Thai version of Team Strategies and Tools to Enhance Performance and Patient Safety (TeamSTEPPS) has been validated, challenges remain due to its subjective evaluation. To date, no studies have examined the relationship between electroencephalogram (EEG) activity and team performance, as assessed by TeamSTEPPS, during virtual simulation-based interprofessional education (SIMBIE), where face-to-face communication is absent.

Objective: This study aims to investigate the correlation between EEG-based brain-to-brain synchronization and TeamSTEPPS scores in multiprofessional teams participating in virtual SIMBIE sessions.

Methods: This single-center study involved 90 participants (15 groups of 6 simulated professionals: 1 medical doctor, 2 nurses, 1 pharmacist, 1 medical technologist, and 1 radiological technologist). Each group completed two 30-minute virtual SIMBIE sessions focusing on team training in a crisis situation involving COVID-19 pneumonia with a difficult airway, resulting in 30 sessions in total. The TeamSTEPPS scores of each participant across 5 domains were independently assessed by 2 trained raters based on screen recording, and their average values were used. The scores of participants in the same session were aggregated to generate a group TeamSTEPPS score, representing group-level performance. EEG data were recorded using wireless EEG acquisition devices and computed for total interdependence (TI), which represents brain-to-brain synchronization. The TI values of participants in the same session were aggregated to produce a group TI, representing group-level brain-to-brain synchronization. We investigated the Pearson correlations between the TI and the scores at both the group and individual levels.

Results: Interrater reliability for the TeamSTEPPS scores among 12 raters indicated good agreement on average (mean 0.73, SD 0.18; range 0.32-0.999). At the individual level, the Pearson correlations between the TI and the scores were weak and not statistically significant across all TeamSTEPPS domains (all adjusted P≥.05). However, strongly negative, statistically significant correlations between the group TI and the group TeamSTEPPS scores in the alpha frequency band (8-12 Hz) of the anterior brain area were found across all TeamSTEPPS domains after correcting for multiple comparisons (mean -0.87, SD 0.06; range -0.93 to -0.8).

Conclusions: Strong negative correlations between the group TI and the group TeamSTEPPS scores were observed in the anterior alpha activity during online hexad virtual SIMBIE. These findings suggest that anterior alpha TI may serve as an objective metric for assessing TeamSTEPPS-based team performance.
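The analysis described in Methods and Results — aggregating per-participant values to the group level, correlating group TI against group TeamSTEPPS scores, and adjusting P values for multiple comparisons — can be sketched in plain Python. The abstract does not name the correction procedure, so the Holm step-down adjustment is used here purely for illustration, and the group-level data below are hypothetical:

```python
from math import sqrt
from statistics import mean

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length samples."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def holm_adjust(p_values):
    """Holm step-down adjustment: multiply the k-th smallest P value by
    (m - k + 1), enforce monotonicity, and cap at 1."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, i in enumerate(order):
        running_max = max(running_max, (m - rank) * p_values[i])
        adjusted[i] = min(1.0, running_max)
    return adjusted

# Hypothetical group-level data: one (TI, score) pair per group for one
# TeamSTEPPS domain and frequency band (the study had 15 groups).
group_ti = [0.42, 0.35, 0.51, 0.48, 0.30]
group_scores = [3.1, 3.8, 2.6, 2.9, 4.0]
r = pearson_r(group_ti, group_scores)  # negative here, as in the reported pattern
```

With 15 groups, each domain-by-band correlation would be computed over 15 such pairs, and the adjusted P values compared against .05 as in the reported results.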

{"title":"Correlation Between Electroencephalogram Brain-to-Brain Synchronization and Team Strategies and Tools to Enhance Performance and Patient Safety Scores During Online Hexad Virtual Simulation-Based Interprofessional Education: Cross-Sectional Correlational Study.","authors":"Atthaphon Viriyopase, Khuansiri Narajeenron","doi":"10.2196/69725","DOIUrl":"10.2196/69725","url":null,"abstract":"<p><strong>Background: </strong>Team performance is crucial in crisis situations. Although the Thai version of Team Strategies and Tools to Enhance Performance and Patient Safety (TeamSTEPPS) has been validated, challenges remain due to its subjective evaluation. To date, no studies have examined the relationship between electroencephalogram (EEG) activity and team performance, as assessed by TeamSTEPPS, during virtual simulation-based interprofessional education (SIMBIE), where face-to-face communication is absent.</p><p><strong>Objective: </strong>This study aims to investigate the correlation between EEG-based brain-to-brain synchronization and TeamSTEPPS scores in multiprofessional teams participating in virtual SIMBIE sessions.</p><p><strong>Methods: </strong>This single-center study involved 90 participants (15 groups of 6 simulated professionals: 1 medical doctor, 2 nurses, 1 pharmacist, 1 medical technologist, and 1 radiological technologist). Each group completed two 30-minute virtual SIMBIE sessions focusing on team training in a crisis situation involving COVID-19 pneumonia with a difficult airway, resulting in 30 sessions in total. The TeamSTEPPS scores of each participant across 5 domains were independently assessed by 2 trained raters based on screen recording, and their average values were used. The scores of participants in the same session were aggregated to generate a group TeamSTEPPS score, representing group-level performance. 
EEG data were recorded using wireless EEG acquisition devices and computed for total interdependence (TI), which represents brain-to-brain synchronization. The TI values of participants in the same session were aggregated to produce a group TI, representing group-level brain-to-brain synchronization. We investigated the Pearson correlations between the TI and the scores at both the group and individual levels.</p><p><strong>Results: </strong>Interrater reliability for the TeamSTEPPS scores among 12 raters indicated good agreement on average (mean 0.73, SD 0.18; range 0.32-0.999). At the individual level, the Pearson correlations between the TI and the scores were weak and not statistically significant across all TeamSTEPPS domains (all adjusted P≥.05). However, strongly negative, statistically significant correlations between the group TI and the group TeamSTEPPS scores in the alpha frequency band (8-12 Hz) of the anterior brain area were found across all TeamSTEPPS domains after correcting for multiple comparisons (mean -0.87, SD 0.06; range -0.93 to -0.8).</p><p><strong>Conclusions: </strong>Strong negative correlations between the group TI and the group TeamSTEPPS scores were observed in the anterior alpha activity during online hexad virtual SIMBIE. 
These findings suggest that anterior alpha TI may serve as an objective metric for assessing TeamSTEPPS-based team performance.</p>","PeriodicalId":36236,"journal":{"name":"JMIR Medical Education","volume":"11 ","pages":"e69725"},"PeriodicalIF":3.2,"publicationDate":"2025-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12583944/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145337560","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
AI's Accuracy in Extracting Learning Experiences From Clinical Practice Logs: Observational Study.
IF 3.2 Q1 EDUCATION, SCIENTIFIC DISCIPLINES Pub Date : 2025-10-15 DOI: 10.2196/68697
Takeshi Kondo, Hiroshi Nishigori

Background: Improving the quality of education in clinical settings requires an understanding of learners' experiences and learning processes. However, this is a significant burden on learners and educators. If learners' learning records could be automatically analyzed and their experiences could be visualized, this would enable real-time tracking of their progress. Large language models (LLMs) may be useful for this purpose, although their accuracy has not been sufficiently studied.

Objective: This study aimed to explore the accuracy of predicting the actual clinical experiences of medical students from their learning log data during clinical clerkship using LLMs.

Methods: This study was conducted at the Nagoya University School of Medicine. Learning log data from medical students participating in a clinical clerkship from April 22, 2024, to May 24, 2024, were used. The Model Core Curriculum for Medical Education was used as a template to extract experiences. OpenAI's ChatGPT was selected for this task after a comparison with other LLMs. Prompts were created using the learning log data and provided to ChatGPT to extract experiences, which were then listed. A web application using GPT-4-turbo was developed to automate this process. The accuracy of the extracted experiences was evaluated by comparing them with the corrected lists provided by the students.

Results: A total of 20 sixth-year medical students participated in this study, resulting in 40 datasets. The overall Jaccard index was 0.59 (95% CI 0.46-0.71), and the Cohen κ was 0.65 (95% CI 0.53-0.76). Overall sensitivity was 62.39% (95% CI 49.96%-74.81%), and specificity was 99.34% (95% CI 98.77%-99.92%). Category-specific performance varied: symptoms showed a sensitivity of 45.43% (95% CI 25.12%-65.75%) and specificity of 98.75% (95% CI 97.31%-100%), examinations showed a sensitivity of 46.76% (95% CI 25.67%-67.86%) and specificity of 98.84% (95% CI 97.81%-99.87%), and procedures achieved a sensitivity of 56.36% (95% CI 37.64%-75.08%) and specificity of 98.92% (95% CI 96.67%-100%). The results suggest that GPT-4-turbo accurately identified many of the actual experiences but missed some because of insufficient detail or a lack of student records.

Conclusions: This study demonstrated that LLMs such as GPT-4-turbo can predict clinical experiences from learning logs with high specificity but moderate sensitivity. Future improvements in AI models, providing feedback to medical students' learning logs and combining them with other data sources such as electronic medical records, may enhance the accuracy. Using artificial intelligence to analyze learning logs for assessment could reduce the burden on learners and educators while improving the quality of educational assessments in medical education.
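The agreement metrics in Results (Jaccard index, sensitivity, specificity) compare the set of experiences extracted by the LLM against the student-corrected list, with the full set of curriculum items as the universe. A minimal sketch, assuming each experience is identified by a string label (the item names below are hypothetical, not from the Model Core Curriculum):

```python
def jaccard(predicted, reference):
    """Jaccard index between the extracted and student-corrected experience sets."""
    predicted, reference = set(predicted), set(reference)
    union = predicted | reference
    return len(predicted & reference) / len(union) if union else 1.0

def sensitivity_specificity(predicted, reference, universe):
    """Sensitivity and specificity, treating every curriculum item absent from
    the student-corrected list as a negative."""
    predicted, reference = set(predicted), set(reference)
    tp = len(predicted & reference)          # correctly extracted experiences
    fn = len(reference - predicted)          # experiences the model missed
    fp = len(predicted - reference)          # experiences the model invented
    tn = len(universe - reference) - fp      # correctly left out
    sens = tp / (tp + fn) if tp + fn else 0.0
    spec = tn / (tn + fp) if tn + fp else 0.0
    return sens, spec

# Hypothetical labels standing in for curriculum items.
curriculum = {"fever", "chest pain", "ECG", "lumbar puncture", "suturing"}
student_list = {"fever", "chest pain", "ECG"}          # corrected by the student
llm_list = {"fever", "chest pain", "lumbar puncture"}  # extracted by the model
```

Per-category results (symptoms, examinations, procedures) would be obtained by restricting the three sets to the items of that category before computing the metrics.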

{"title":"AI's Accuracy in Extracting Learning Experiences From Clinical Practice Logs: Observational Study.","authors":"Takeshi Kondo, Hiroshi Nishigori","doi":"10.2196/68697","DOIUrl":"10.2196/68697","url":null,"abstract":"<p><strong>Background: </strong>Improving the quality of education in clinical settings requires an understanding of learners' experiences and learning processes. However, this is a significant burden on learners and educators. If learners' learning records could be automatically analyzed and their experiences could be visualized, this would enable real-time tracking of their progress. Large language models (LLMs) may be useful for this purpose, although their accuracy has not been sufficiently studied.</p><p><strong>Objective: </strong>This study aimed to explore the accuracy of predicting the actual clinical experiences of medical students from their learning log data during clinical clerkship using LLMs.</p><p><strong>Methods: </strong>This study was conducted at the Nagoya University School of Medicine. Learning log data from medical students participating in a clinical clerkship from April 22, 2024, to May 24, 2024, were used. The Model Core Curriculum for Medical Education was used as a template to extract experiences. OpenAI's ChatGPT was selected for this task after a comparison with other LLMs. Prompts were created using the learning log data and provided to ChatGPT to extract experiences, which were then listed. A web application using GPT-4-turbo was developed to automate this process. The accuracy of the extracted experiences was evaluated by comparing them with the corrected lists provided by the students.</p><p><strong>Results: </strong>A total of 20 sixth-year medical students participated in this study, resulting in 40 datasets. The overall Jaccard index was 0.59 (95% CI 0.46-0.71), and the Cohen κ was 0.65 (95% CI 0.53-0.76). 
Overall sensitivity was 62.39% (95% CI 49.96%-74.81%), and specificity was 99.34% (95% CI 98.77%-99.92%). Category-specific performance varied: symptoms showed a sensitivity of 45.43% (95% CI 25.12%-65.75%) and specificity of 98.75% (95% CI 97.31%-100%), examinations showed a sensitivity of 46.76% (95% CI 25.67%-67.86%) and specificity of 98.84% (95% CI 97.81%-99.87%), and procedures achieved a sensitivity of 56.36% (95% CI 37.64%-75.08%) and specificity of 98.92% (95% CI 96.67%-100%). The results suggest that GPT-4-turbo accurately identified many of the actual experiences but missed some because of insufficient detail or a lack of student records.</p><p><strong>Conclusions: </strong>This study demonstrated that LLMs such as GPT-4-turbo can predict clinical experiences from learning logs with high specificity but moderate sensitivity. Future improvements in AI models, providing feedback to medical students' learning logs and combining them with other data sources such as electronic medical records, may enhance the accuracy. Using artificial intelligence to analyze learning logs for assessment could reduce the burden on learners and educators while improving the quality of educational assessments in medical education.</p>","PeriodicalId":36236,"journal":{"name":"JMIR Medical Education","volume":"11 ","pages":"e68697"},"PeriodicalIF":3.2,"publicationDate":"2025-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12529426/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145303860","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
From Hype to Implementation: Embedding GPT-4o in Medical Education.
IF 3.2 Q1 EDUCATION, SCIENTIFIC DISCIPLINES Pub Date : 2025-10-15 DOI: 10.2196/79309
Sumaia Sabouni, Mohammad-Adel Moufti, Mohamed Hassan Taha

Unlabelled: The release of GPT-4 Omni (GPT-4o), an advanced multimodal generative artificial intelligence (AI) model, generated substantial enthusiasm in the field of higher education. However, one year later, medical education continues to face significant challenges, demonstrating the need to move from initial experimentation with the integration of multimodal AIs in medical education toward meaningful integration. In this Viewpoint, we argue that GPT-4o's true value lies not in novelty, but in its potential to enhance training in communication skills, clinical reasoning, and procedural skills by offering real-time simulations and adaptive learning experiences using text, audio, and visual inputs in a safe, immersive, and cost-effective environment. We explore how this innovation has made it possible to address key medical educational challenges by simulating realistic patient interactions, offering personalized feedback, and reducing educator workloads and costs, where traditional teaching methods struggle to replicate the complexity and dynamism of real-world clinical scenarios. However, we also address the critical challenges of this approach, including data accuracy, bias, and ethical decision-making. Rather than seeing GPT-4o as a replacement, we propose its use as a strategic supplement, scaffolded into curriculum frameworks and evaluated through ongoing research. As the focus shifts from AI novelty to sustainable implementation, we call on educators, policymakers, and curriculum designers to establish governance mechanisms, pilot evaluation strategies, and develop faculty training. The future of AI in medical education depends not on the next breakthrough, but on how we integrate today's tools with intention and rigor.

{"title":"From Hype to Implementation: Embedding GPT-4o in Medical Education.","authors":"Sumaia Sabouni, Mohammad-Adel Moufti, Mohamed Hassan Taha","doi":"10.2196/79309","DOIUrl":"10.2196/79309","url":null,"abstract":"<p><strong>Unlabelled: </strong>The release of GPT-4 Omni (GPT-4o), an advanced multimodal generative artificial intelligence (AI) model, generated substantial enthusiasm in the field of higher education. However, one year later, medical education continues to face significant challenges, demonstrating the need to move from initial experimentation with the integration of multimodal AIs in medical education toward meaningful integration. In this Viewpoint, we argue that GPT-4o's true value lies not in novelty, but in its potential to enhance training in communication skills, clinical reasoning, and procedural skills by offering real-time simulations and adaptive learning experiences using text, audio, and visual inputs in a safe, immersive, and cost-effective environment. We explore how this innovation has made it possible to address key medical educational challenges by simulating realistic patient interactions, offering personalized feedback, and reducing educator workloads and costs, where traditional teaching methods struggle to replicate the complexity and dynamism of real-world clinical scenarios. However, we also address the critical challenges of this approach, including data accuracy, bias, and ethical decision-making. Rather than seeing GPT-4o as a replacement, we propose its use as a strategic supplement, scaffolded into curriculum frameworks and evaluated through ongoing research. As the focus shifts from AI novelty to sustainable implementation, we call on educators, policymakers, and curriculum designers to establish governance mechanisms, pilot evaluation strategies, and develop faculty training. 
The future of AI in medical education depends not on the next breakthrough, but on how we integrate today's tools with intention and rigor.</p>","PeriodicalId":36236,"journal":{"name":"JMIR Medical Education","volume":"11 ","pages":"e79309"},"PeriodicalIF":3.2,"publicationDate":"2025-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12527310/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145303795","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Training Gaps in Digital Skills for the Cancer Health Care Workforce Based on Insights From Clinical Professionals, Nonclinical Professionals, and Patients and Caregivers: Qualitative Study.
IF 3.2 Q1 EDUCATION, SCIENTIFIC DISCIPLINES Pub Date : 2025-10-08 DOI: 10.2196/78490
David Liñares, Theologia Tsitsi, Noemí López-Rey, Wilfredo Guanipa-Sierra, Susana Aldecoa-Landesa, Carme Carrión, Daniela Cabutto, Deborah Moreno-Alonso, Clara Madrid-Alejos, Andreas Charalambous, Ana Clavería

Background: The integration of digital technologies is becoming increasingly essential in cancer care. However, limited digital health literacy among clinical and nonclinical cancer health care professionals poses significant challenges to effective implementation and sustainability over time. To address this, the European Union is prioritizing the development of targeted digital skills training programs for cancer care providers, the TRANSiTION project among them. A crucial initial step in this effort is conducting a comprehensive gap analysis to identify specific training needs.

Objective: The aim of this work is to identify training gaps and prioritize the digital skill development needs in the oncology health care workforce.

Methods: An importance-performance analysis (IPA) was conducted following a survey that assessed the performance and importance of 7 digital skills: information, communication, content creation, safety, eHealth problem-solving, ethics, and patient empowerment.

Results: A total of 67 participants from 11 European countries completed the study: 38 clinical professionals (CP), 16 nonclinical professionals (NCP), and 13 patients or caregivers (PC). CP acknowledged the need for a comprehensive training program that includes all 7 digital skills. Digital patient empowerment and safety skills emerged as the highest priorities for both CP and NCP. Conversely, NCP assigned a lower priority to digital content creation skills, and PC assigned a lower priority to digital information and ethical skills. The IPA also revealed discrepancies in digital communication skills across groups (H=6.50; P=.04).

Conclusions: The study showcased the pressing need for comprehensive digital skill training for cancer health care professionals across diverse backgrounds and health care systems in Europe, tailored to their occupation and care setting. Incorporating PC perspectives ensures a balanced approach to addressing these training gaps. These findings provide a valuable knowledge base for designing digital skills training programs, promoting a holistic approach that integrates the perspectives of the various stakeholders involved in digital cancer care.
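Importance-performance analysis places each skill in one of four quadrants according to whether its mean importance and mean performance ratings fall above or below the grand means (the classic Martilla-James scheme; the paper's exact cutoffs are not stated, so grand means are assumed here, and the ratings below are hypothetical):

```python
from statistics import mean

def ipa_quadrants(skills):
    """skills: dict mapping skill name -> (mean importance, mean performance).
    Returns the Martilla-James quadrant label for each skill."""
    imp_mean = mean(i for i, _ in skills.values())
    perf_mean = mean(p for _, p in skills.values())
    labels = {
        (True, True): "keep up the good work",
        (True, False): "concentrate here",       # important but underperforming
        (False, True): "possible overkill",
        (False, False): "low priority",
    }
    return {
        name: labels[(i >= imp_mean, p >= perf_mean)]
        for name, (i, p) in skills.items()
    }

# Hypothetical ratings for two of the 7 surveyed digital skills.
example = ipa_quadrants({"safety": (5, 2), "content creation": (3, 4)})
```

A skill rated as highly important but poorly performed lands in the "concentrate here" quadrant, which is how IPA surfaces training priorities such as the patient empowerment and safety skills reported above.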

{"title":"Training Gaps in Digital Skills for the Cancer Health Care Workforce Based on Insights From Clinical Professionals, Nonclinical Professionals, and Patients and Caregivers: Qualitative Study.","authors":"David Liñares, Theologia Tsitsi, Noemí López-Rey, Wilfredo Guanipa-Sierra, Susana Aldecoa-Landesa, Carme Carrión, Daniela Cabutto, Deborah Moreno-Alonso, Clara Madrid-Alejos, Andreas Charalambous, Ana Clavería","doi":"10.2196/78490","DOIUrl":"10.2196/78490","url":null,"abstract":"<p><strong>Background: </strong>The integration of digital technologies is becoming increasingly essential in cancer care. However, limited digital health literacy among clinical and nonclinical cancer health care professionals poses significant challenges to effective implementation and sustainability over time. To address this, the European Union is prioritizing the development of targeted digital skills training programs for cancer care providers, the TRANSiTION project among them. A crucial initial step in this effort is conducting a comprehensive gap analysis to identify specific training needs.</p><p><strong>Objective: </strong>The aim of this work is to identify training gaps and prioritize the digital skill development needs in the oncology health care workforce.</p><p><strong>Methods: </strong>An importance-performance analysis (IPA) was conducted following a survey that assessed the performance and importance of 7 digital skills: information, communication, content creation, safety, eHealth problem-solving, ethics, and patient empowerment.</p><p><strong>Results: </strong>A total of 67 participants from 11 European countries completed the study: 38 clinical professionals (CP), 16 nonclinical professionals (NCP), and 13 patients or caregivers (PC). CP acknowledged the need for a comprehensive training program that includes all 7 digital skills. Digital patient empowerment and safety skills emerge as the highest priorities for both CP and NCP. 
Conversely, NCP assigned a lower priority to digital content creation skills, and PC assigned a lower priority to digital information and ethical skills. The IPA also revealed discrepancies in digital communication skills across groups (H=6.50; P=.04).</p><p><strong>Conclusions: </strong>The study showcased the pressing need for comprehensive digital skill training for cancer health care professionals across diverse backgrounds and health care systems in Europe, tailored to their occupation and care setting. Incorporating PC perspectives ensures a balanced approach to addressing these training gaps. These findings provide a valuable knowledge base for designing digital skills training programs, promoting a holistic approach that integrates the perspectives of the various stakeholders involved in digital cancer care.</p>","PeriodicalId":36236,"journal":{"name":"JMIR Medical Education","volume":"11 ","pages":"e78490"},"PeriodicalIF":3.2,"publicationDate":"2025-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12547342/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145253056","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
ChatGPT in Medical Education: Bibliometric and Visual Analysis.
IF 3.2 Q1 EDUCATION, SCIENTIFIC DISCIPLINES Pub Date : 2025-10-07 DOI: 10.2196/72356
Yuning Zhang, Xiaolu Xie, Qi Xu
Background: ChatGPT is a generative artificial intelligence-based chatbot developed by OpenAI. Since its release in the second half of 2022, it has been widely applied across various fields. In particular, the application of ChatGPT in medical education has become a significant trend. To gain a comprehensive understanding of the research developments and trends regarding ChatGPT in medical education, we conducted an extensive review and analysis of the current state of research in this field.

Objective: This study used bibliometric and visualization analysis to explore the current state of research and development trends regarding ChatGPT in medical education.

Methods: A bibliometric analysis of 407 articles on ChatGPT in medical education published between March 2023 and June 2025 was conducted using CiteSpace, VOSviewer, and Bibliometrix (RTool of RStudio). Visualization of countries, institutions, journals, authors, keywords, and references was also conducted.

Results: This bibliometric analysis included a total of 407 studies. Research in this field began in 2023, showing a notable surge in annual publications until June 2025. The United States, China, Türkiye, the United Kingdom, and Canada produced the most publications. Networks of collaboration also formed among institutions. The University of California system was a core research institution, with 3.4% (14/407) of the publications and 0.17 betweenness centrality. BMC Medical Education, Medical Teacher, and the Journal of Medical Internet Research were all among the top 10 journals in terms of both publication volume and citation frequency. The most prolific author was Yavuz Selim Kiyak, who has established a stable collaboration network with Isil Irem Budakoglu and Ozlem Coskun. Author collaboration in this field is usually limited, with most academic research conducted by independent teams and little communication between teams. The most frequent keywords were "AI," "ChatGPT," and "medical education." Keyword analysis further revealed "educational assessment," "exam," and "clinical practice" as current research hot spots. The most cited paper was "Performance of ChatGPT on USMLE: Potential for AI-Assisted Medical Education Using Large Language Models," and the paper with the strongest citation burst was "Are ChatGPT's Knowledge and Interpretation Ability Comparable to Those of Medical Students in Korea for Taking a Parasitology Examination?: A Descriptive Study." Both papers focus on evaluating ChatGPT's performance in medical exams.

Conclusions: This study reveals the significant potential of ChatGPT in medical education. As the technology improves, its applications will expand into more fields. To promote the diversification and effectiveness of ChatGPT in medical education, future research should strengthen interregional collaboration and enhance research quality. These findings provide valuable insights for researchers in identifying research perspectives and guiding future research directions.
"ChatGPT in Medical Education: Bibliometric and Visual Analysis." Yuning Zhang, Xiaolu Xie, Qi Xu. DOI: 10.2196/72356. JMIR Medical Education 2025;11:e72356.
Effectiveness of a Fully Online Scientific Research Works Peer Support Group Model for Research Capacity Building Through Conducting Systematic Reviews Among Health Care Professionals: Retrospective Cohort Studies.
IF 3.2 Q1 EDUCATION, SCIENTIFIC DISCIPLINES Pub Date : 2025-10-02 DOI: 10.2196/78862
Yuki Kataoka, Ryuhei So, Masahiro Banno, Yasushi Tsujimoto

Background: Research capacity building (RCB) among health care professionals remains limited, particularly for those working outside academic institutions. Japan is experiencing a decline in original clinical research due to insufficient RCB infrastructure. Our previous hospital-based workshops were effective but faced geographical and sustainability constraints. We developed a fully online Scientific Research Works Peer Support Group (SRWS-PSG) model that addresses geographical and time-bound constraints and establishes a sustainable economic model. Mentees use online materials, receive support from mentors via a communication platform after formulating their research question, and transition into mentors upon publication.

Objective: We evaluated whether our model's theoretical benefits translated into actual program effectiveness in RCB among health care professionals.

Methods: We conducted a retrospective cohort study of health care professionals who participated in the SRWS-PSG program between September 2019 and January 2025. Mentees progressed through a structured modular curriculum covering systematic review methodology, from protocol development to manuscript preparation, with personalized mentoring support. We evaluated manuscript submission, program discontinuation, promotion to a mentor status, and mentor response time. We collected data from program records and chat logs. Manuscript submission was defined as mentor-confirmed submission of a systematic review manuscript to a peer-reviewed journal. Program discontinuation referred to formal withdrawal before manuscript submission. Mentor promotion was defined as acceptance of an invitation to serve as a junior mentor after manuscript submission. Mentor response time was the elapsed time from a mentee's question in the chat to the first reply by an assigned mentor.

Results: Of 85 mentees analyzed, 31 (36.5%) held academic degrees (PhD or MPH), and 68 (80%) were medical doctors. During a median follow-up of 10 months, 51 (60%) submitted manuscripts and 46 (90%) became mentors. Ten mentees (12%) discontinued the program. The median mentor response time was 0.8 hours, with 90% responding within 24 hours.
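The mentor response time outcome can be computed straight from timestamped chat logs. A minimal stdlib sketch (the timestamps below are invented for illustration, not study data):

```python
from datetime import datetime
from statistics import median

def response_stats(pairs):
    """Median mentor response time in hours, plus the share of questions
    answered within 24 hours, mirroring the outcome definition in the
    Methods. `pairs` holds (question_time, first_reply_time) tuples."""
    hours = [(reply - question).total_seconds() / 3600
             for question, reply in pairs]
    within_24h = sum(h <= 24 for h in hours) / len(hours)
    return median(hours), within_24h

fmt = "%Y-%m-%d %H:%M"
# Invented chat-log entries, not data from the SRWS-PSG program.
pairs = [
    (datetime.strptime("2024-05-01 09:00", fmt), datetime.strptime("2024-05-01 09:30", fmt)),
    (datetime.strptime("2024-05-02 20:00", fmt), datetime.strptime("2024-05-02 21:00", fmt)),
    (datetime.strptime("2024-05-03 08:00", fmt), datetime.strptime("2024-05-04 10:00", fmt)),
]
med_hours, frac_24h = response_stats(pairs)  # median 1.0 h; 2 of 3 within 24 h
```

With real logs, the only extra work is matching each mentee question to the first reply by the assigned mentor, as in the outcome definition above.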

Conclusions: A majority of participants of SRWS-PSG submitted manuscripts. This fully online RCB program might address geographical barriers and provide an adaptable approach for RCB across diverse health care contexts.

Beyond Chatbots: Moving Toward Multistep Modular AI Agents in Medical Education.
IF 3.2 Q1 EDUCATION, SCIENTIFIC DISCIPLINES Pub Date : 2025-10-02 DOI: 10.2196/76661
Minyang Chow, Olivia Ng

Unlabelled: The integration of large language models into medical education has significantly increased, providing valuable assistance in single-turn, isolated educational tasks. However, their utility remains limited in complex, iterative instructional workflows characteristic of clinical education. Single-prompt AI chatbots lack the necessary contextual awareness and iterative capability required for nuanced educational tasks. This Viewpoint paper argues for a shift from conventional chatbot paradigms toward a modular, multistep artificial intelligence (AI) agent framework that aligns closely with the pedagogical needs of medical educators. We propose a modular framework composed of specialized AI agents, each responsible for distinct instructional subtasks. Furthermore, these agents operate within clearly defined boundaries and are equipped with tools and resources to accomplish their tasks and ensure pedagogical continuity and coherence. Specialized agents enhance accuracy by using models optimally tailored to specific cognitive tasks, increasing the quality of outputs compared to single-model workflows. Using a clinical scenario design as an illustrative example, we demonstrate how task specialization, iterative feedback, and tool integration in an agent-based pipeline can mirror expert-driven educational processes. The framework maintains a human-in-the-loop structure, with educators reviewing and refining each output before progression, ensuring pedagogical integrity, flexibility, and transparency. Our proposed shift toward modular AI agents offers significant promise for enhancing educational workflows by delegating routine tasks to specialized systems. We encourage educators to explore how these emerging AI ecosystems could transform medical education.
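As a toy illustration of the modular, human-in-the-loop pipeline argued for here, the sketch below chains specialized "agents" with an educator review hook between steps. The agent names and string-based subtasks are invented stand-ins for LLM-backed components:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Agent:
    """One specialized step in a modular instructional pipeline.
    `run` would wrap an LLM call in practice; here it is a plain
    function so the control flow stays visible."""
    name: str
    run: Callable[[str], str]

def pipeline(task: str, agents: List[Agent],
             review: Callable[[str, str], str]) -> str:
    """Pass the task through each agent in turn, with an educator
    review checkpoint (human in the loop) after every step."""
    out = task
    for agent in agents:
        out = agent.run(out)
        out = review(agent.name, out)  # educator may edit before progression
    return out

# Invented subtasks standing in for scenario design steps.
agents = [
    Agent("case_writer", lambda t: t + " -> draft clinical scenario"),
    Agent("quiz_builder", lambda t: t + " -> 5 MCQs"),
]
# Identity review: the educator accepts each output unchanged.
result = pipeline("sepsis teaching case", agents, review=lambda name, out: out)
```

The point of the structure is that each agent has one bounded subtask and every intermediate output passes through the reviewer, matching the pedagogical-continuity argument above.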

Impact of Prompt Engineering on the Performance of ChatGPT Variants Across Different Question Types in Medical Student Examinations: Cross-Sectional Study.
IF 3.2 Q1 EDUCATION, SCIENTIFIC DISCIPLINES Pub Date : 2025-10-01 DOI: 10.2196/78320
Ming-Yu Hsieh, Tzu-Ling Wang, Pen-Hua Su, Ming-Chih Chou

Background: Large language models such as ChatGPT (OpenAI) have shown promise in medical education assessments, but the comparative effects of prompt engineering across optimized variants and relative performance against medical students remain unclear.

Objective: This study aims to systematically evaluate the impact of prompt engineering on five ChatGPT variants (GPT-3.5, GPT-4.0, GPT-4o, GPT-4o1-mini, and GPT-4o1) and benchmark their performance against fourth-year medical students in midterm and final examinations.

Methods: A 100-item examination dataset covering multiple choice questions, short answer questions, clinical case analysis, and image-based questions was administered to each model under no-prompt and prompt-engineering conditions over 5 independent runs. Student cohort scores (N=143) were collected for comparison. Responses were scored using standardized rubrics, converted to percentages, and analyzed in SPSS Statistics (v29.0) with paired t tests and Cohen d (P<.05).
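The paired comparison described here reduces to a few lines of arithmetic. The sketch below computes the paired t statistic and Cohen d_z on invented scores; the p-value step is omitted because it needs a t-distribution table or SciPy:

```python
from statistics import mean, stdev
from math import sqrt

def paired_t_and_cohens_d(before, after):
    """Paired t statistic and Cohen's d_z for matched scores,
    in the spirit of the no-prompt vs prompt-engineering comparison
    in the Methods. The scores below are illustrative, not study data."""
    diffs = [a - b for b, a in zip(before, after)]
    n = len(diffs)
    m, s = mean(diffs), stdev(diffs)  # mean and sample SD of paired differences
    t = m / (s / sqrt(n))             # paired t statistic, df = n - 1
    d = m / s                         # Cohen's d_z (within-subject effect size)
    return t, d

# Five independent runs of a hypothetical model, scored 0-100.
no_prompt = [59.0, 60.5, 58.0, 61.0, 59.5]
prompted  = [69.0, 71.0, 68.5, 72.0, 70.0]
t, d = paired_t_and_cohens_d(no_prompt, prompted)
```

In SPSS the same comparison is a paired-samples t test; the hand computation only makes explicit what the rubric-to-percentage scores feed into.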

Results: Baseline midterm scores ranged from 59.2% (GPT-3.5) to 94.1% (GPT-4o1), and final scores ranged from 55% to 92.4%. Fourth-year students averaged 89.4% (midterm) and 80.2% (final). Prompt engineering significantly improved GPT-3.5 (10.6%, P<.001) and GPT-4.0 (3.2%, P=.002) but yielded negligible gains for optimized variants (P=.07-.94). Optimized models matched or exceeded student performance on both exams.

Conclusions: Prompt engineering enhances early-generation model performance, whereas advanced variants inherently achieve near-ceiling accuracy, surpassing medical students. As large language models mature, emphasis should shift from prompt design to model selection, multimodal integration, and critical use of artificial intelligence as a learning companion.

Mapping the Evolution of China's Traditional Chinese Medicine Education Policies: Insights From a BERTopic-Based Descriptive Study.
IF 3.2 Q1 EDUCATION, SCIENTIFIC DISCIPLINES Pub Date : 2025-09-25 DOI: 10.2196/72660
Tao Yang, Fan Yang, Yong Li

Background: Traditional Chinese medicine (TCM) education in China has evolved significantly, shaped by both national policy and social needs. Despite this, the academic community has yet to fully explore the long-term trends and core issues in TCM education policies. As the global interest in TCM continues to grow, understanding these trends becomes crucial for guiding future policy and educational reforms. This study used cutting-edge deep learning techniques to fill this gap, offering a novel, data-driven perspective on the evolution of TCM education policies.

Objective: This study aimed to systematically analyze the research topics and evolutionary trends in TCM education policies in China using a deep learning-based topic modeling approach, providing valuable insights to guide future policy development and educational practices.

Methods: TCM policy-related documents were collected from major sources, including the Ministry of Education, the National Administration of Traditional Chinese Medicine, PKU Lawinfo, and archives of TCM colleges. The text was preprocessed and analyzed using the BERTopic model, a state-of-the-art tool for topic modeling, to extract key themes and examine the policy development trajectory.
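BERTopic represents each extracted topic by class-based TF-IDF keywords. The pure-Python miniature below sketches that idea only; it is not the library's implementation, and the tokenized "topics" are invented:

```python
from collections import Counter
from math import log

def class_tfidf(classes, top_n=2):
    """Miniature class-based TF-IDF in the spirit of BERTopic's
    topic-representation step: weight each term by its in-class
    frequency times log(1 + avg_class_length / corpus_frequency).
    `classes` maps a topic label to all tokens grouped under it."""
    tf = {c: Counter(tokens) for c, tokens in classes.items()}
    corpus_freq = Counter()
    for counts in tf.values():
        corpus_freq.update(counts)
    avg_class_len = sum(len(tokens) for tokens in classes.values()) / len(classes)
    keywords = {}
    for c, counts in tf.items():
        scored = {t: n * log(1 + avg_class_len / corpus_freq[t])
                  for t, n in counts.items()}
        keywords[c] = [t for t, _ in
                       sorted(scored.items(), key=lambda kv: -kv[1])[:top_n]]
    return keywords

# Invented token groups standing in for clustered policy documents.
topics = {
    "curriculum": "curriculum reform curriculum course".split(),
    "rural": "rural health rural clinic".split(),
}
kw = class_tfidf(topics)
```

The real pipeline first embeds and clusters the policy texts; this step only explains where per-topic keyword lists such as "curriculum reform" come from.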

Results: The analysis revealed 27 core topics in TCM education policies, including medical education, curriculum reform, rural health care, internationalization, and the integration of TCM with modern education systems. These topics were clustered into 5 stages of policy evolution: marginalization, standardization, specialization, systematization, and restandardization. These stages reflect the ongoing balancing act between modernizing TCM education and preserving its traditional values, while adapting to national political, social, and economic strategies.

Conclusions: This study offers groundbreaking insights into the dynamic and multifaceted evolution of TCM education policies in China. By leveraging the BERTopic model, it provides a comprehensive framework for understanding the forces shaping TCM education and offers actionable recommendations for future policy making. The findings are essential for educators, policymakers, and researchers aiming to refine and innovate TCM education in an increasingly globalized world.

Health Care Professionals' Knowledge, Attitude, Practice, and Infrastructure Accessibility for e-Learning in Ethiopia: Cross-Sectional Study.
IF 3.2 Q1 EDUCATION, SCIENTIFIC DISCIPLINES Pub Date : 2025-09-25 DOI: 10.2196/65598
Sophie Sarah Rossner, Muluken Gizaw, Sefonias Getachew, Eyerusalem Getachew, Alemnew Destaw, Sarah Negash, Lena Bauer, Eva Susanne Marion Hermann, Abel Shita, Susanne Unverzagt, Pablo Sandro Carvalho Santos, Eva Johanna Kantelhardt, Eric Sven Kroeber

Background: Training of health care professionals and their participation in continuous medical education are crucial to ensure quality health care. Low-resource countries in Sub-Saharan Africa struggle with health care disparities between urban and rural areas concerning access to educational resources. While e-learning can facilitate a wide distribution of educational content, it depends on learners' engagement and infrastructure.

Objective: This study aims to assess knowledge, attitude, practice, and access to infrastructure related to e-learning among health care professionals in primary health care settings in Ethiopia.

Methods: In April 2023, we carried out a quantitative, questionnaire-based cross-sectional study guided by the knowledge, attitudes, and practice framework, including additional items on available infrastructure. Scores in each category were dichotomized as "high" or "low" at the median, and logistic regression was then applied to selected sociodemographic factors. We included health care professionals working in general and primary hospitals, health centers, and health posts.
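The median split described in the Methods takes only a few lines of stdlib Python. Tie handling at the median is an assumption here (the abstract does not specify it); ties are assigned to "low":

```python
from statistics import median

def median_split(scores):
    """Dichotomize questionnaire scores at the median, as in the Methods.
    Scores strictly above the median are labeled 'high', the rest 'low';
    how the study handled ties at the median is not stated, so sending
    ties to 'low' is an assumption of this sketch."""
    cut = median(scores)
    return ["high" if s > cut else "low" for s in scores]

# Invented knowledge scores for seven respondents.
scores = [3, 7, 5, 9, 4, 6, 8]
labels = median_split(scores)  # median is 6
```

The resulting binary labels are what the logistic regression models as the outcome.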

Results: Of 398 participants (response rate 94.5%), more than half (n=207, 52%) reported feeling confident about their understanding of e-learning and conducting online searches, both for general (n=247, 62.1%) and medical-related content (n=251, 63.1%). Higher levels of education were associated with better knowledge (adjusted odds ratio [AOR] 2.32, 95% CI 1.45-3.68). Regardless of financial and personal efforts, we observed a generally positive attitude. Almost half of the participants (n=172, 43.2%) reported using the internet daily, compared to 16.8% (n=67) of participants who never used the internet. Higher education (AOR 2.56, 95% CI 1.57-4.16) and income levels (AOR 1.31, 95% CI 1.06-1.62) were associated with higher practice scores of e-learning-related activities. Women, however, exhibited lower practice scores (AOR 0.44, 95% CI 0.27-0.71). Regular access to an internet-enabled device was reported by 43.5% (n=173) of the participants. Smartphones were the primarily used device (268/393, 67.3%). Common barriers to internet access were limited internet availability (142/437, 32.5%) and costs (n=190, 43.5%). Higher education (AOR 1.56, 95% CI 0.98, 2.46) and income (AOR 1.50; 95% CI 1.21-1.85) were associated with increased access to infrastructure, while it was decreased for women (AOR 0.48, 95% CI 0.30-0.77).

Conclusions: Although Ethiopian health care professionals report mixed levels of knowledge, they have a positive attitude toward e-learning in medical education. While internet use is common, especially via smartphone, access to devices and reliable internet is limited. To improve accessibility, investments in digital infrastructure and individual digital education programs are necessary, especially targeting women and low-income groups. Given their wide availability, e-learning programs should be optimized for smartphones.
{"title":"Health Care Professionals' Knowledge, Attitude, Practice, and Infrastructure Accessibility for e-Learning in Ethiopia: Cross-Sectional Study.","authors":"Sophie Sarah Rossner, Muluken Gizaw, Sefonias Getachew, Eyerusalem Getachew, Alemnew Destaw, Sarah Negash, Lena Bauer, Eva Susanne Marion Hermann, Abel Shita, Susanne Unverzagt, Pablo Sandro Carvalho Santos, Eva Johanna Kantelhardt, Eric Sven Kroeber","doi":"10.2196/65598","DOIUrl":"10.2196/65598","url":null,"abstract":"<p><strong>Background: </strong>Training of health care professionals and their participation in continuous medical education are crucial to ensure quality health care. Low-resource countries in Sub-Saharan Africa struggle with health care disparities between urban and rural areas concerning access to educational resources. While e-learning can facilitate a wide distribution of educational content, it depends on learners' engagement and infrastructure.</p><p><strong>Objective: </strong>This study aims to assess knowledge, attitude, practice, and access to infrastructure related to e-learning among health care professionals in primary health care settings in Ethiopia.</p><p><strong>Methods: </strong>In April 2023, we carried out a quantitative, questionnaire-based cross-sectional study guided by the knowledge, attitudes, and practice framework, including additional items on available infrastructure. The scores in each category are defined as \"high\" and \"low\" based on the median, followed by the application of logistic regression on selected sociodemographic factors. We included health care professionals working in general and primary hospitals, health centers, and health posts.</p><p><strong>Results: </strong>Of 398 participants (response rate 94.5%), more than half (n=207, 52%) reported feeling confident about their understanding of e-learning and conducting online searches, both for general (n=247, 62.1%) and medical-related content (n=251, 63.1%). 
Higher levels of education were associated with better knowledge (adjusted odds ratio [AOR] 2.32, 95% CI 1.45-3.68). Regardless of financial and personal efforts, we observed a generally positive attitude. Almost half of the participants (n=172, 43.2%) reported using the internet daily, compared to 16.8% (n=67) of participants who never used the internet. Higher education (AOR 2.56, 95% CI 1.57-4.16) and income levels (AOR 1.31, 95% CI 1.06-1.62) were associated with higher practice scores of e-learning-related activities. Women, however, exhibited lower practice scores (AOR 0.44, 95% CI 0.27-0.71). Regular access to an internet-enabled device was reported by 43.5% (n=173) of the participants. Smartphones were the primarily used device (268/393, 67.3%). Common barriers to internet access were limited internet availability (142/437, 32.5%) and costs (n=190, 43.5%). Higher education (AOR 1.56, 95% CI 0.98, 2.46) and income (AOR 1.50; 95% CI 1.21-1.85) were associated with increased access to infrastructure, while it was decreased for women (AOR 0.48, 95% CI 0.30-0.77).</p><p><strong>Conclusions: </strong>Although Ethiopian health care professionals report mixed levels of knowledge, they have a positive attitude toward e-learning in medical education. While internet use is common, especially via smartphone, the access to devices and reliable internet is limited. 
To improve accessibility, investments in the digital infrastructure and individual digital education programs are necessary, especially targeting women and low-income groups. Due to their wide availability, e-learning programs should be optimized for smartphones.","PeriodicalId":36236,"journal":{"name":"JMIR Medical Education","volume":"11 ","pages":"e65598"},"PeriodicalIF":3.2,"publicationDate":"2025-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12463343/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145150821","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}