
Medical Teacher: Latest publications

Enabling diagnostic excellence in the real world: Managing complexity, uncertainty and clinical responsibility.
IF 3.3 | CAS Tier 2 (Education) | Q1 EDUCATION, SCIENTIFIC DISCIPLINES | Pub Date: 2025-03-01 | Epub Date: 2024-09-16 | DOI: 10.1080/0142159X.2024.2402032
Nicola Cunningham, Helmy Cook, Julia Harrison

Diagnostic error is a significant category within preventable patient harm, and it takes many years of effort to develop proficiency in diagnostic reasoning. One of the key challenges medical schools must address is preparing students for the complexity, uncertainty and clinical responsibility in going from student to doctor. Recognising the importance of both cognitive and systems-related factors in diagnostic accuracy, we designed the QUID Prompt (Questions to Use for Improving Diagnosis) for students to refer to at the bedside. This set of questions prompts careful consideration, analysis, and signposting of decision-making processes, to assist students in transitioning from medical school to the real world of work and achieving diagnostic excellence in clinical settings.

Citations: 0
Physical and biophysical markers of assessment in medical training: A scoping review of the literature.
IF 3.3 | CAS Tier 2 (Education) | Q1 EDUCATION, SCIENTIFIC DISCIPLINES | Pub Date: 2025-03-01 | Epub Date: 2024-04-30 | DOI: 10.1080/0142159X.2024.2345269
Danielle T Miller, Sarah Michael, Colin Bell, Cody H Brevik, Bonnie Kaplan, Ellie Svoboda, John Kendall

Purpose: Assessment in medical education has changed over time to measure the evolving skills required of current medical practice. Physical and biophysical markers of assessment attempt to use technology to gain insight into medical trainees' knowledge, skills, and attitudes. The authors conducted a scoping review to map the literature on the use of physical and biophysical markers of assessment in medical training.

Materials and methods: The authors searched seven databases on 1 August 2022, for publications that utilized physical or biophysical markers in the assessment of medical trainees (medical students, residents, fellows, and synonymous terms used in other countries). Physical or biophysical markers included: heart rate and heart rate variability, visual tracking and attention, pupillometry, hand motion analysis, skin conductivity, salivary cortisol, functional magnetic resonance imaging (fMRI), and functional near-infrared spectroscopy (fNIRS). The authors mapped the relevant literature using Bloom's taxonomy of knowledge, skills, and attitudes and extracted additional data including study design, study environment, and novice vs. expert differentiation from February to June 2023.

Results: Of 6,069 unique articles, 443 met inclusion criteria. The majority of studies assessed trainees using heart rate variability (n = 160, 36%), followed by visual attention (n = 143, 32%), hand motion analysis (n = 67, 15%), salivary cortisol (n = 67, 15%), fMRI (n = 29, 7%), skin conductivity (n = 26, 6%), fNIRS (n = 19, 4%), and pupillometry (n = 16, 4%). The majority of studies (n = 167, 38%) analyzed non-technical skills, followed by studies that analyzed technical skills (n = 155, 35%), knowledge (n = 114, 26%), and attitudinal skills (n = 61, 14%). One hundred and sixty-nine studies (38%) attempted to use physical or biophysical markers to differentiate between novice and expert.

Conclusion: This review provides a comprehensive description of the current use of physical and biophysical markers in medical education training, including the current technology and skills assessed. Additionally, while physical and biophysical markers have the potential to augment current assessment in medical education, there remain significant gaps in research on the reliability, validity, cost, practicality, and educational impact of implementing these markers of assessment.

Citations: 0
Feeling the responsibility: Exploring the emotional experiences of final-year medical students when carrying out clinical tasks.
IF 3.3 | CAS Tier 2 (Education) | Q1 EDUCATION, SCIENTIFIC DISCIPLINES | Pub Date: 2025-03-01 | Epub Date: 2024-05-21 | DOI: 10.1080/0142159X.2024.2351137
Miriam Alexander, Ronja Behrend, Anne Franz, Harm Peters

Purpose: The concept of Entrustable Professional Activities (EPA) is increasingly used to operationalize learning in the clinical workplace, yet little is known about the emotions of learners feeling the responsibility when carrying out professional tasks.

Methods: We explored the emotional experiences of medical students in their final clerkship year when performing clinical tasks. We used an online reflective diary. Text entries were analysed using inductive-deductive content analysis with reference to the EPA framework and the control-value theory of achievement emotions.

Results: Students described a wide range of emotions related to carrying out various clinical tasks. They reported positive-activating emotions, ranging from enjoyment to relaxation, and negative-deactivating emotions, ranging from anxiety to boredom. Emotions varied across individual students and were related to the characteristics of a task, an increasing level of autonomy, the students' perceived ability to perform a task and the level of supervision provided.

Discussion: Emotions are widely present in and have an impact on medical students' workplace learning, and they relate to key elements of the EPA framework. Supervisors play a key role in eliciting positive-activating emotions and the motivation to learn by providing a level of supervision and guidance appropriate to the students' perceived ability to perform the task.

Citations: 0
The influence of candidates' race on examiners' ratings in standardised assessments of clinical practice.
IF 3.3 | CAS Tier 2 (Education) | Q1 EDUCATION, SCIENTIFIC DISCIPLINES | Pub Date: 2025-03-01 | Epub Date: 2024-05-21 | DOI: 10.1080/0142159X.2024.2345266
Celia Brown, Sarah Khavandi, Ann Sebastian, Kerry Badger, Rachel Westacott, Malcolm W R Reed, Mark Gurnell, Amir H Sam

Purpose: Delivering fair and reliable summative assessments in medical education assumes examiner decision making is devoid of bias. We investigated whether candidate racial appearances influenced examiner ratings in undergraduate clinical exams.

Methods: We used an internet-based design. Examiners watched a randomised set of six videos of three different white candidates and three different non-white (Asian, black and Chinese) candidates taking a clinical history at either fail, borderline or pass grades. We compared the median and interquartile range (IQR) of the paired difference between scores for the white and non-white candidates at each performance grade and tested for statistical significance.

Results: 160 examiners participated. At the fail grade, the black and Chinese candidates scored lower than the white candidate, with median paired differences of -2.5 and -1, respectively (both p < 0.001). At the borderline grade, the black and Chinese candidates scored higher than the white candidate, with median paired differences of +2 and +3, respectively (both p < 0.001). At the passing grade, the Asian candidate scored lower than the white candidate (median paired difference -1, p < 0.001).
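For readers who want to see the arithmetic, the sketch below reproduces the kind of paired-difference comparison described in the Methods. It is illustrative only: the abstract does not name the significance test, so a Wilcoxon signed-rank test (a common choice for paired examiner ratings) is assumed here, and the score values are invented.

```python
# Illustrative only: paired comparison of examiner scores for a white vs. a non-white
# candidate at one performance grade. The test choice (Wilcoxon signed-rank) and the
# scores are assumptions for demonstration, not data from the study.
import numpy as np
from scipy.stats import wilcoxon

white_scores = np.array([14, 15, 13, 16, 15, 14, 13, 15])     # hypothetical ratings
nonwhite_scores = np.array([12, 13, 11, 14, 12, 13, 11, 12])  # hypothetical ratings

paired_diff = nonwhite_scores - white_scores           # per-examiner paired difference
median_diff = np.median(paired_diff)
q1, q3 = np.percentile(paired_diff, [25, 75])          # interquartile range of the differences
stat, p_value = wilcoxon(nonwhite_scores, white_scores)

print(f"median paired difference = {median_diff}, IQR = ({q1}, {q3}), p = {p_value:.4f}")
```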

Conclusion: The racial appearance of candidates appeared to influence the scores awarded by examiners, but not in a uniform manner.

Citations: 0
Bringing competency-based communication training to scale: A multi-institutional virtual simulation-based mastery learning curriculum for Emergency Medicine residents.
IF 3.3 | CAS Tier 2 (Education) | Q1 EDUCATION, SCIENTIFIC DISCIPLINES | Pub Date: 2025-03-01 | Epub Date: 2024-05-28 | DOI: 10.1080/0142159X.2024.2345267
Laurie M Aluce, Julie J Cooper, Lillian Liang Emlet, Elaine R Cohen, Simon J Ostrowski, Gordon J Wood, Julia H Vermylen

Purpose: Serious illness communication skills are essential for physicians, yet competency-based training is lacking. We address scalability barriers to competency-based communication skills training by assessing the feasibility of a multi-center, virtual simulation-based mastery learning (vSBML) curriculum on breaking bad news (BBN).

Methods: First-year emergency medicine residents at three academic medical centers participated in the virtual curriculum. Participants completed a pretest with a standardized patient (SP), a workshop with didactics and small group roleplay with SPs, a posttest with an SP, and additional deliberate practice sessions if needed to achieve the minimum passing standard (MPS). Participants were assessed using a previously published BBN assessment tool that included a checklist and scaled items. Authors compared pre- and posttests to evaluate the impact of the curriculum.

Results: Twenty-eight (90%) of 31 eligible residents completed the curriculum. Eighty-nine percent of participants did not meet the MPS at pretest. Post-intervention, there was a statistically significant improvement in checklist performance (median = 93% vs. 53%, p < 0.001) and on all scaled items assessing quality of communication. All participants ultimately achieved the MPS.

Conclusions: A multi-site vSBML curriculum brought all participants to mastery in the core communication skill of BBN and represents a feasible, scalable model to incorporate competency-based communication skills education in a widespread manner.

Citations: 0
Bridging the gap in teaching self-regulated learning: A call for deeper integration.
IF 3.3 | CAS Tier 2 (Education) | Q1 EDUCATION, SCIENTIFIC DISCIPLINES | Pub Date: 2025-03-01 | Epub Date: 2024-11-02 | DOI: 10.1080/0142159X.2024.2422009
Supianto
{"title":"Bridging the gap in teaching self-regulated learning: A call for deeper integration.","authors":"Supianto","doi":"10.1080/0142159X.2024.2422009","DOIUrl":"10.1080/0142159X.2024.2422009","url":null,"abstract":"","PeriodicalId":18643,"journal":{"name":"Medical Teacher","volume":" ","pages":"572-573"},"PeriodicalIF":3.3,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142564023","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Blind spots in medical education - International perspectives.
IF 3.3 | CAS Tier 2 (Education) | Q1 EDUCATION, SCIENTIFIC DISCIPLINES | Pub Date: 2025-03-01 | Epub Date: 2024-04-30 | DOI: 10.1080/0142159X.2024.2345271
Sean Tackett, Yvonne Steinert, Susan Mirabal, Darcy A Reed, Cynthia R Whitehead, Scott M Wright

Background: All individuals and groups have blind spots that can create problems if unaddressed. The goal of this study was to examine blind spots in medical education from international perspectives.

Methods: From December 2022 to March 2023, we distributed an electronic survey through international networks of medical students, postgraduate trainees, and medical educators. Respondents named blind spots affecting their medical education system and then rated nine blind spot domains from a study of U.S. medical education along five-point Likert-type scales (1 = much less attention needed; 5 = much more attention needed). We tested for differences between blind spot ratings by respondent groups. We also analyzed the blind spots that respondents identified to determine those not previously described and performed content analysis on open-ended responses about blind spot domains.

Results: There were 356 respondents from 88 countries, including 127 (44%) educators, 80 (28%) medical students, and 33 (11%) postgraduate trainees. At least 80% of respondents rated each blind spot domain as needing 'more' or 'much more' attention; the highest was 88% for 'Patient perspectives and voices that are not heard, valued, or understood.' In analyses by gender, role in medical education, World Bank country income level, and region, a mean difference of 0.5 was seen in only five of the possible 279 statistical comparisons. Among the 885 blind spots documented, new blind spot areas emerged relating to issues that cross national boundaries (e.g. international standards) and to the sufficiency of resources to support medical education. Comments about the nine blind spot domains illustrated that cultural, health system, and governmental elements influenced how blind spots are manifested across different settings.

Discussion: There may be general agreement throughout the world about blind spots in medical education that deserve more attention. This could establish a basis for coordinated international effort to allocate resources and tailor interventions that advance medical education.

Citations: 0
Effects of a competence-based approach for clerkship teaching under alternating clinical placements: An explanatory sequential mixed-methods research.
IF 3.3 | CAS Tier 2 (Education) | Q1 EDUCATION, SCIENTIFIC DISCIPLINES | Pub Date: 2025-03-01 | Epub Date: 2024-05-31 | DOI: 10.1080/0142159X.2024.2356830
Da-Ya Yang, Xiao-Dong Zhuang, Jun-Xun Li, Jing-Zhou Jiang, Yue Guo, Xiao-Yu Zhang, Jun Liu, Wei Chen, Xin-Xue Liao, David C M Taylor

Background: It is unclear whether alternating placements during clinical clerkship, without an explicit emphasis on clinical competencies, would bring about optimal educational outcomes.

Methods: This is an explanatory sequential mixed-methods research. We enrolled a convenience sample of 41 medical students from the eight-year programme at Sun Yat-sen University who received alternating placements during clerkship. The effects of a competence-based approach (n = 21) versus a traditional approach (n = 20) to clerkship teaching were compared. In the quantitative phase, course satisfaction was measured via an online survey and academic performance was determined through final scores on summative assessment. Then, in the qualitative phase, students were invited for semi-structured interviews about their learning experiences, and the transcripts were used for thematic analysis.

Results: Quantitative findings showed that students in the study group rated high course satisfaction and performed significantly better in their final scores compared with those in the control group. Qualitative findings from thematic analysis showed that students were relatively neutral about their preference on placement models, but clearly perceived, capitalised, and appreciated that their competencies were being cultivated by an instructor who was regarded as a positive role model.

Conclusion: A competence-based approach to clerkship teaching resulted in better course satisfaction and academic performance, and was perceived, capitalised, and appreciated by students.

Citations: 0
Leveraging evaluation of quality on medical education research with ChatGPT.
IF 3.3 | CAS Tier 2 (Education) | Q1 EDUCATION, SCIENTIFIC DISCIPLINES | Pub Date: 2025-03-01 | Epub Date: 2024-08-04 | DOI: 10.1080/0142159X.2024.2385678
Javier Alejandro Flores-Cohaila, Peter Garcia-Portocarrero, Deysi A Saldaña-Amaya, Brayan Miranda-Chavez, Cesar Copaja-Corzo

What is the educational challenge? The Medical Education Research Study Quality Instrument (MERSQI) is widely used to evaluate the quality of quantitative research in medical education. It has strong evidence of validity and is endorsed by guidelines. However, the manual appraisal process is time-consuming and resource-intensive, highlighting the need for more efficient methods. What are the proposed solutions? We propose to use ChatGPT to evaluate the quality of medical education research with the MERSQI and to compare its scoring with that of human evaluators. What are the potential benefits to a broader global audience? Using ChatGPT to evaluate medical education research with the MERSQI can decrease the resources required for quality appraisal. This allows faster summaries of evidence, reducing the workload of researchers, editors, and educators. Furthermore, ChatGPT's capability to extract supporting excerpts provides transparency and may have potential for data extraction and for training new medical education researchers. What are the next steps? We plan to continue evaluating medical education research with ChatGPT using the MERSQI and other instruments to determine its feasibility in this realm. Moreover, we plan to investigate which types of studies ChatGPT performs best in.
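As a rough illustration of the proposed workflow, the sketch below asks a chat model to rate a study abstract against a few MERSQI items and to quote the supporting excerpts. It is not the authors' implementation: the model name, prompt wording, and the abridged item list are assumptions, and the only real dependency is the OpenAI Python client.

```python
# Hypothetical sketch of MERSQI-style appraisal with a chat model; not the authors' code.
# The item wording below is abridged and the model/prompt are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

MERSQI_ITEMS = [
    "Study design (single-group cross-sectional through randomized controlled trial)",
    "Sampling: number of institutions and response rate",
    "Type of data: subjective assessment vs. objective measurement",
    "Validity evidence for evaluation instrument scores",
]

def appraise(abstract: str) -> str:
    """Ask the model to score an abstract on each item and cite the supporting excerpt."""
    prompt = (
        "Rate the following medical education study on each MERSQI item. "
        "For every item, give a score and quote the excerpt that supports it.\n\n"
        + "\n".join(f"- {item}" for item in MERSQI_ITEMS)
        + f"\n\nStudy abstract:\n{abstract}"
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```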

Citations: 0
Can ChatGPT generate practice question explanations for medical students, a new faculty teaching tool?
IF 3.3 | CAS Tier 2 (Education) | Q1 EDUCATION, SCIENTIFIC DISCIPLINES | Pub Date: 2025-03-01 | Epub Date: 2024-06-20 | DOI: 10.1080/0142159X.2024.2363486
Lilin Tong, Jennifer Wang, Srikar Rapaka, Priya S Garg

Introduction: Multiple-choice questions (MCQs) are frequently used for formative assessment in medical school but often lack sufficient answer explanations given faculty time constraints. ChatGPT (Chat Generative Pre-trained Transformer) has emerged as a potential student learning aid and faculty teaching tool. This study aims to evaluate ChatGPT's performance in answering and explaining MCQs.

Method: Ninety-four faculty-generated MCQs were collected from the pre-clerkship curriculum at a US medical school. ChatGPT's accuracy in answering MCQs was tracked on the first attempt without an answer prompt (Pass 1) and after being given a prompt for the correct answer (Pass 2). Explanations provided by ChatGPT were compared with faculty-generated explanations, and a 3-point scale was used to rate their accuracy and thoroughness against the faculty-generated answers.
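A minimal sketch of the two-pass protocol described above is shown below. It is not the study's code: the model name, prompt wording, and the naive first-letter check for Pass 1 correctness are illustrative assumptions.

```python
# Hypothetical sketch of the two-pass MCQ protocol; not the study's implementation.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def ask(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

def two_pass_mcq(stem: str, options: dict[str, str], keyed_answer: str) -> dict:
    """Pass 1: answer unaided. Pass 2: reveal the keyed answer and request an explanation."""
    option_text = "\n".join(f"{letter}. {text}" for letter, text in options.items())
    pass1 = ask(
        "Answer this multiple-choice question. Start your reply with the single letter "
        f"of your answer, then explain.\n\n{stem}\n{option_text}"
    )
    pass2 = ask(
        f"The correct answer to this question is {keyed_answer}. Explain why it is correct "
        f"and why the other options are wrong.\n\n{stem}\n{option_text}"
    )
    # Naive correctness check: assumes the reply begins with the answer letter.
    correct_first_try = pass1.strip().upper().startswith(keyed_answer.upper())
    return {"pass1": pass1, "pass2": pass2, "correct_on_pass1": correct_first_try}
```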

Results: On the first attempt, ChatGPT answered 75% of faculty-generated MCQs correctly. Among correctly answered questions, 66.4% of ChatGPT's explanations matched faculty explanations, and 89.1% captured some key aspects without providing inaccurate information. The proportion of inaccurate explanations increased significantly when the question was not answered correctly on the first pass (2.7% if correct on the first pass vs. 34.6% if incorrect, p < 0.001).

Conclusion: ChatGPT shows promise in helping faculty and students with explanations for practice MCQs but should be used with caution. Faculty should review the explanations and supplement them to ensure coverage of learning objectives. Students can benefit from ChatGPT's explanations as immediate feedback when it answers a question correctly on the first try; if the question is answered incorrectly, students should remain cautious of the explanation and seek clarification from instructors.

Citations: 0