
Canadian medical education journal: latest publications

OSCEai dermatology: augmenting dermatologic medical education with Large Language Model GPT-4.
Pub Date : 2025-12-22 eCollection Date: 2025-12-01 DOI: 10.36834/cmej.80056
Ye-Jean Park, Eddie Guo, Muskaan Sachdeva, Bryan Ma, Sara Mirali, Brian Rankin, Nikki Nathanielsz, Abrahim Abduelmula, Tatiana Lapa, Mehul Gupta, Trevor Champagne

OSCEai Dermatology demonstrates how large language models (LLMs) like GPT-4 can be integrated into medical education to enhance trainees' history taking and management skills in an OSCE-like format, including in visual-based specialties like dermatology. By generating diverse, realistic skin cancer role-play scenarios across different skin tones alongside the integration of pre-existing, evidence-based images, the app provides learners with valuable, personalized feedback. This innovation offers a novel, interactive learning tool that supplements traditional teaching methods and can be applied across various specialties. Institutions can adopt or adapt similar LLM-driven educational tools to introduce trainees to a wider range of clinical cases, fostering improved diagnostic skills and patient-centred, culturally sensitive care.
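
The abstract describes GPT-4-driven role-play with automated feedback only at a high level, so the following is a minimal sketch of how such an OSCE-style encounter could be wired together with the OpenAI Python SDK. The case description, the prompts, and the ask_patient/get_feedback helpers are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch (not the OSCEai codebase) of a GPT-4-backed OSCE role-play
# with a feedback step, using the OpenAI Python SDK (v1.x).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical case prompt; the app's real scenarios and skin-tone handling differ.
case_prompt = (
    "You are a standardized patient in a dermatology OSCE station. "
    "Case: a 58-year-old patient with Fitzpatrick type V skin presenting with a "
    "changing pigmented lesion on the sole of the foot. Answer only what the "
    "student asks; do not volunteer the diagnosis."
)

history = [{"role": "system", "content": case_prompt}]

def ask_patient(student_question: str) -> str:
    """Send the student's question and return the simulated patient's reply."""
    history.append({"role": "user", "content": student_question})
    reply = client.chat.completions.create(model="gpt-4", messages=history)
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

def get_feedback() -> str:
    """Ask the model to step out of role and critique the encounter so far."""
    critique = history + [{
        "role": "user",
        "content": "Step out of role. Give brief, personalized feedback on the "
                   "student's history taking and proposed management.",
    }]
    reply = client.chat.completions.create(model="gpt-4", messages=critique)
    return reply.choices[0].message.content
```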

{"title":"OSCEai dermatology: augmenting dermatologic medical education with Large Language Model GPT-4.","authors":"Ye-Jean Park, Eddie Guo, Muskaan Sachdeva, Bryan Ma, Sara Mirali, Brian Rankin, Nikki Nathanielsz, Abrahim Abduelmula, Tatiana Lapa, Mehul Gupta, Trevor Champagne","doi":"10.36834/cmej.80056","DOIUrl":"10.36834/cmej.80056","url":null,"abstract":"<p><p>OSCEai Dermatology demonstrates how large language models (LLMs) like GPT-4 can be integrated into medical education to enhance trainees' history taking and management skills in an OSCE-like format, including in visual-based specialties like dermatology. By generating diverse, realistic skin cancer role-play scenarios across different skin tones alongside the integration of pre-existing, evidence-based images, the app provides learners with valuable, personalized feedback. This innovation offers a novel, interactive learning tool that supplements traditional teaching methods and can be applied across various specialties. Institutions can adopt or adapt similar LLM-driven educational tools to introduce trainees to a wider range of clinical cases, fostering improved diagnostic skills and patient-centred, culturally sensitive care.</p>","PeriodicalId":72503,"journal":{"name":"Canadian medical education journal","volume":"16 6","pages":"29-31"},"PeriodicalIF":0.0,"publicationDate":"2025-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12826809/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146055011","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
OpenAI's Sora in medical education: artificial videos in the classroom of the future Sora d'OpenAI dans l'enseignement médical : des vidéos artificielles dans la salle de classe du futur.
Pub Date : 2025-12-22 eCollection Date: 2025-12-01 DOI: 10.36834/cmej.79065
Ethan Waisberg, Joshua Ong, Rahul Kumar, Mouayad Masalkhi, Andrew G Lee
{"title":"OpenAI's Sora in medical education: artificial videos in the classroom of the future Sora d'OpenAI dans l'enseignement médical : des vidéos artificielles dans la salle de classe du futur.","authors":"Ethan Waisberg, Joshua Ong, Rahul Kumar, Mouayad Masalkhi, Andrew G Lee","doi":"10.36834/cmej.79065","DOIUrl":"10.36834/cmej.79065","url":null,"abstract":"","PeriodicalId":72503,"journal":{"name":"Canadian medical education journal","volume":"16 6","pages":"41-43"},"PeriodicalIF":0.0,"publicationDate":"2025-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12826824/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146055043","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Re-evaluating the role of personal statements in pediatric residency admissions in the era of artificial intelligence: comparing faculty ratings of human and AI-generated statements.
Pub Date : 2025-12-22 eCollection Date: 2025-12-01 DOI: 10.36834/cmej.81345
Brittany Curry, Amrit Kirpalani, Mia Remington, Tamara Van Hooren, Ye Shen, Erin R Peebles

Background: Personal statements play a large role in pediatric residency applications, providing insights into candidates' motivations, experiences, and fit for the program. With large language models (LLMs) such as Chat Generative Pre-trained Transformer (ChatGPT), concerns have arisen regarding how this may influence the authenticity of statements in evaluating candidates. This study investigates the efficacy and perceived authenticity of LLM-generated personal statements compared to human-generated statements in residency applications.

Methods: We conducted a blinded study comparing 30 ChatGPT-generated personal statements with 30 human-written statements. Four pediatric faculty raters assessed each statement using a standardized 10-point rubric. We analyzed the data using linear mixed-effects models, a chi-square sensitivity analysis, an evaluation of rater accuracy in identifying statement origin as well as consistency of scores amongst raters using intraclass correlation coefficients (ICC).
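
As a rough illustration of the analysis pipeline named here, the sketch below fits a linear mixed-effects model with a random intercept per rater, runs a chi-square test on raters' origin guesses, and computes intraclass correlation coefficients. The column names, the ratings.csv file, and the use of statsmodels, SciPy, and pingouin are assumptions for illustration, not the authors' code.

```python
# A minimal sketch of a mixed-model / chi-square / ICC analysis, assuming a
# long-format table with one row per rater-statement pair.
import pandas as pd
import pingouin as pg
import statsmodels.formula.api as smf
from scipy.stats import chi2_contingency

# Hypothetical columns: statement, rater, source (AI/human), guessed_source, score
ratings = pd.read_csv("ratings.csv")

# Linear mixed-effects model: does statement source predict the score once
# rater-to-rater differences are absorbed by a random intercept per rater?
mixed = smf.mixedlm("score ~ source", data=ratings, groups=ratings["rater"]).fit()
print(mixed.summary())

# Chi-square sensitivity analysis of whether raters could identify the origin.
origin_table = pd.crosstab(ratings["source"], ratings["guessed_source"])
chi2, p, dof, _ = chi2_contingency(origin_table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")

# Inter-rater consistency via intraclass correlation coefficients.
icc = pg.intraclass_corr(data=ratings, targets="statement",
                         raters="rater", ratings="score")
print(icc[["Type", "ICC", "CI95%"]])
```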

Results: There was no significant difference in mean scores between AI and human-written statements. Raters could only identify the source of a letter (AI or human) with 59% accuracy. There was considerable disagreement in scores between raters as indicated by negative ICCs.

Conclusions: AI-generated statements were rated similarly to human-authored statements and were indistinguishable by reviewers, highlighting the sophistication of these LLM models and the challenge in detecting their use. Furthermore, scores varied substantially between reviewers. As AI becomes increasingly used in application processes, it is imperative to examine its implications in the overall evaluation of applicants.

{"title":"Re-evaluating the role of personal statements in pediatric residency admissions in the era of artificial intelligence: comparing faculty ratings of human and AI-generated statements.","authors":"Brittany Curry, Amrit Kirpalani, Mia Remington, Tamara Van Hooren, Ye Shen, Erin R Peebles","doi":"10.36834/cmej.81345","DOIUrl":"10.36834/cmej.81345","url":null,"abstract":"<p><strong>Background: </strong>Personal statements play a large role in pediatric residency applications, providing insights into candidates' motivations, experiences, and fit for the program. With large language models (LLMs) such as Chat Generative Pre-trained Transformer (ChatGPT), concerns have arisen regarding how this may influence the authenticity of statements in evaluating candidates. This study investigates the efficacy and perceived authenticity of LLM-generated personal statements compared to human-generated statements in residency applications.</p><p><strong>Methods: </strong>We conducted a blinded study comparing 30 ChatGPT-generated personal statements with 30 human-written statements. Four pediatric faculty raters assessed each statement using a standardized 10-point rubric. We analyzed the data using linear mixed-effects models, a chi-square sensitivity analysis, an evaluation of rater accuracy in identifying statement origin as well as consistency of scores amongst raters using intraclass correlation coefficients (ICC).</p><p><strong>Results: </strong>There was no significant difference in mean scores between AI and human-written statements. Raters could only identify the source of a letter (AI or human) with 59% accuracy. There was considerable disagreement in scores between raters as indicated by negative ICCs.</p><p><strong>Conclusions: </strong>AI-generated statements were rated similarly to human-authored statements and were indistinguishable by reviewers, highlighting the sophistication of these LLM models and the challenge in detecting their use. Furthermore, scores varied substantially between reviewers. As AI becomes increasingly used in application processes, it is imperative to examine its implications in the overall evaluation of applicants.</p>","PeriodicalId":72503,"journal":{"name":"Canadian medical education journal","volume":"16 6","pages":"21-24"},"PeriodicalIF":0.0,"publicationDate":"2025-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12826822/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146055160","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Enhancing equity in medical school interview prep with ChatGPT.
Pub Date : 2025-12-22 eCollection Date: 2025-12-01 DOI: 10.36834/cmej.82035
Kyra Yuan Li, William Tien Tran, Farah Hashemi-Sabet, Fok-Han Leung
{"title":"Enhancing equity in medical school interview prep with ChatGPT.","authors":"Kyra Yuan Li, William Tien Tran, Farah Hashemi-Sabet, Fok-Han Leung","doi":"10.36834/cmej.82035","DOIUrl":"10.36834/cmej.82035","url":null,"abstract":"","PeriodicalId":72503,"journal":{"name":"Canadian medical education journal","volume":"16 6","pages":"52-53"},"PeriodicalIF":0.0,"publicationDate":"2025-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12826817/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146054908","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
OSCEai: personalized interactive learning for undergraduate medical education.
Pub Date : 2025-12-22 eCollection Date: 2025-12-01 DOI: 10.36834/cmej.79220
Eddie Guo, Rashi Ramchandani, Ye-Jean Park, Mehul Gupta

Background: This study aims to evaluate the effectiveness of the OSCEai, a large language model-based platform that simulates clinical encounters, in enhancing undergraduate medical education.

Methods: A web-based application, OSCEai, was developed to bridge theoretical and practical learning. Following use, medical students from the University of Calgary Class of 2026 completed an anonymized survey on the usability, utility, and overall experience of OSCEai.

Results: A total of 37 respondents answered the anonymized survey. The OSCEai platform was highly valued for its ability to provide data on demand (33/37), support self-paced learning (30/37), and offer realistic patient interactions (29/37). The ease of use and medical content quality were rated at 4.73 (95% CI: 4.58 to 4.88) and 4.70 (95% CI: 4.55 to 4.86) out of 5, respectively. Some participants (8/37) commented that few cases were not representative and needed clarification about app functionality. Despite these limitations, OSCEai was favorably compared to lecture-based teaching methods, with an overall reception rating of 4.62 (95% CI: 4.46 to 4.79) out of 5.
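
For readers unfamiliar with the reporting format, a mean Likert rating with a t-based 95% confidence interval of the kind quoted above can be computed as in this small sketch; the scores below are invented for illustration and are not the study data.

```python
# A minimal sketch of a mean rating with a t-based 95% confidence interval.
import numpy as np
from scipy import stats

scores = np.array([5, 5, 4, 5, 4, 5, 5, 4, 5, 5])  # hypothetical 1-5 ratings

mean = scores.mean()
sem = stats.sem(scores)  # standard error of the mean
ci_low, ci_high = stats.t.interval(0.95, df=len(scores) - 1, loc=mean, scale=sem)
print(f"{mean:.2f} (95% CI: {ci_low:.2f} to {ci_high:.2f})")
```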

Interpretation: The OSCEai platform fills a gap in medical training through its scalable, interactive, and personalized design. The findings suggest that integrating technologies, like OSCEai, into medical curricula can enhance the quality and efficacy of medical education.

{"title":"OSCEai: personalized interactive learning for undergraduate medical education.","authors":"Eddie Guo, Rashi Ramchandani, Ye-Jean Park, Mehul Gupta","doi":"10.36834/cmej.79220","DOIUrl":"10.36834/cmej.79220","url":null,"abstract":"<p><strong>Background: </strong>This study aims to evaluate the effectiveness of the OSCEai, a large language model-based platform that simulates clinical encounters, in enhancing undergraduate medical education.</p><p><strong>Methods: </strong>A web-based application, OSCEai, was developed to bridge theoretical and practical learning. Following use, medical students from the University of Calgary Class of 2026 completed an anonymized survey on the usability, utility, and overall experience of OSCEai.</p><p><strong>Results: </strong>A total of 37 respondents answered the anonymized survey. The OSCEai platform was highly valued for its ability to provide data on demand (33/37), support self-paced learning (30/37), and offer realistic patient interactions (29/37). The ease of use and medical content quality were rated at 4.73 (95% CI: 4.58 to 4.88) and 4.70 (95% CI: 4.55 to 4.86) out of 5, respectively. Some participants (8/37) commented that few cases were not representative and needed clarification about app functionality. Despite these limitations, OSCEai was favorably compared to lecture-based teaching methods, with an overall reception rating of 4.62 (95% CI: 4.46 to 4.79) out of 5.</p><p><strong>Interpretation: </strong>The OSCEai platform fills a gap in medical training through its scalable, interactive, and personalized design. The findings suggest that integrating technologies, like OSCEai, into medical curricula can enhance the quality and efficacy of medical education.</p>","PeriodicalId":72503,"journal":{"name":"Canadian medical education journal","volume":"16 6","pages":"7-14"},"PeriodicalIF":0.0,"publicationDate":"2025-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12826823/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146054973","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Bridging global dermatology education gaps: the promise and challenges of leveraging AI-driven medical training to advance equity and personalization with OSCEai dermatology.
Pub Date : 2025-12-22 eCollection Date: 2025-12-01 DOI: 10.36834/cmej.82571
Sophia Ma, Tarek Zieneldien, Farah Succaria
{"title":"Bridging global dermatology education gaps: the promise and challenges of leveraging AI-driven medical training to advance equity and personalization with OSCEai dermatology.","authors":"Sophia Ma, Tarek Zieneldien, Farah Succaria","doi":"10.36834/cmej.82571","DOIUrl":"10.36834/cmej.82571","url":null,"abstract":"","PeriodicalId":72503,"journal":{"name":"Canadian medical education journal","volume":"16 6","pages":"60-61"},"PeriodicalIF":0.0,"publicationDate":"2025-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12826812/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146054905","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Artificial Intelligence can transform formative assessment in medical education.
Pub Date : 2025-12-22 eCollection Date: 2025-12-01 DOI: 10.36834/cmej.79759
Joshua Feldman, Christopher Gilchrist, Fok-Han Leung
{"title":"Artificial Intelligence can transform formative assessment in medical education.","authors":"Joshua Feldman, Christopher Gilchrist, Fok-Han Leung","doi":"10.36834/cmej.79759","DOIUrl":"10.36834/cmej.79759","url":null,"abstract":"","PeriodicalId":72503,"journal":{"name":"Canadian medical education journal","volume":"16 6","pages":"39-40"},"PeriodicalIF":0.0,"publicationDate":"2025-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12826807/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146054876","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Investigating the threat of AI to undergraduate medical school admissions: a study of its potential impact on the rating of applicant essays.
Pub Date : 2025-12-22 eCollection Date: 2025-12-01 DOI: 10.36834/cmej.79690
Joshua Choi, Jenny Zhao, Thuy-Anh Ngo, Lawrence Grierson

Background: Medical school applications often require short written essays or personal statements, which are purportedly used to assess professional qualities related to the practice of medicine. With generative artificial intelligence (AI) tools capable of supplementing or replacing inputs by human applicants, concerns about how these tools impact written assessments are growing. This study explores how AI influences the ratings of essays used for medical school admissions.

Methods: A within-subject experimental design was employed. Eight participants (academic clinicians, faculty researchers, medical students, and a community member) rated essays written by 24 undergraduate students and recent graduates from McMaster University. The students were divided into four groups: medical school aspirants with AI assistance (ASP-AI), aspirants without AI assistance (ASP), non-aspirants with AI assistance (NASP-AI), and essays generated solely by ChatGPT 3.5 (AI-ONLY). Participants were provided training in the application of single Likert scale tool before rating. Differences in ratings by writer group were determined via one-way between group ANOVA.
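
A one-way between-groups ANOVA of the kind described can be run in a few lines; the sketch below uses SciPy's f_oneway with invented scores for the four writer groups, purely to illustrate the test, not to reproduce the study's analysis.

```python
# A minimal sketch of a one-way between-groups ANOVA across four writer groups.
from scipy.stats import f_oneway

asp_ai  = [6.0, 5.5, 7.0, 6.5, 5.0, 6.0]  # aspirants with AI assistance (invented)
asp     = [5.5, 6.0, 6.5, 5.0, 6.0, 7.0]  # aspirants without AI (invented)
nasp_ai = [5.0, 6.5, 5.5, 6.0, 5.5, 6.0]  # non-aspirants with AI (invented)
ai_only = [6.0, 5.5, 6.0, 6.5, 5.5, 5.0]  # ChatGPT 3.5 alone (invented)

f_stat, p_value = f_oneway(asp_ai, asp, nasp_ai, ai_only)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```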

Results: Analyses revealed no statistically significant differences in ratings across the four writer groups (p = .358). The intraclass correlation coefficient was .147.

Conclusion: The proliferation of AI adds to prevailing questions about the value personal statements and essays have in supporting applicant selection. We speculate that these assessments hold less value than ever in providing authentic insight into applicant attributes. In this context, we suggest that medical schools move away from the use of essays in their admissions processes.

{"title":"Investigating the threat of AI to undergraduate medical school admissions: a study of its potential impact on the rating of applicant essays.","authors":"Joshua Choi, Jenny Zhao, Thuy-Anh Ngo, Lawrence Grierson","doi":"10.36834/cmej.79690","DOIUrl":"10.36834/cmej.79690","url":null,"abstract":"<p><strong>Background: </strong>Medical school applications often require short written essays or personal statements, which are purportedly used to assess professional qualities related to the practice of medicine. With generative artificial intelligence (AI) tools capable of supplementing or replacing inputs by human applicants, concerns about how these tools impact written assessments are growing. This study explores how AI influences the ratings of essays used for medical school admissions.</p><p><strong>Methods: </strong>A within-subject experimental design was employed. Eight participants (academic clinicians, faculty researchers, medical students, and a community member) rated essays written by 24 undergraduate students and recent graduates from McMaster University. The students were divided into four groups: medical school aspirants with AI assistance (ASP-AI), aspirants without AI assistance (ASP), non-aspirants with AI assistance (NASP-AI), and essays generated solely by ChatGPT 3.5 (AI-ONLY). Participants were provided training in the application of single Likert scale tool before rating. Differences in ratings by writer group were determined via one-way between group ANOVA.</p><p><strong>Results: </strong>Analyses revealed no statistically significant differences in ratings across the four writer groups (<i>p</i> = .358). The intraclass correlation coefficient was .147.</p><p><strong>Conclusion: </strong>The proliferation of AI adds to prevailing questions about the value personal statements and essays have in supporting applicant selection. We speculate that these assessments hold less value than ever in providing authentic insight into applicant attributes. In this context, we suggest that medical schools move away from the use of essays in their admissions processes.</p>","PeriodicalId":72503,"journal":{"name":"Canadian medical education journal","volume":"16 6","pages":"15-20"},"PeriodicalIF":0.0,"publicationDate":"2025-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12826818/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146055038","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Between stethoscope and algorithm: is Canadian medical education ready for AI-enabled care?
Pub Date : 2025-12-22 eCollection Date: 2025-12-01 DOI: 10.36834/cmej.82463
Amal Khan
{"title":"Between stethoscope and algorithm: is Canadian medical education ready for AI-enabled care?","authors":"Amal Khan","doi":"10.36834/cmej.82463","DOIUrl":"10.36834/cmej.82463","url":null,"abstract":"","PeriodicalId":72503,"journal":{"name":"Canadian medical education journal","volume":"16 6","pages":"50-51"},"PeriodicalIF":0.0,"publicationDate":"2025-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12826821/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146047485","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
AI literacy starts at the bedside: a call for grassroots and reflective approaches in medical education.
Pub Date : 2025-12-22 eCollection Date: 2025-12-01 DOI: 10.36834/cmej.81537
Austin Solak, Urmi Sheth, Ragav Chona, Allie Jones, Nikoo Aghaei, Ricky Hu
{"title":"AI literacy starts at the bedside: a call for grassroots and reflective approaches in medical education.","authors":"Austin Solak, Urmi Sheth, Ragav Chona, Allie Jones, Nikoo Aghaei, Ricky Hu","doi":"10.36834/cmej.81537","DOIUrl":"10.36834/cmej.81537","url":null,"abstract":"","PeriodicalId":72503,"journal":{"name":"Canadian medical education journal","volume":"16 6","pages":"46-47"},"PeriodicalIF":0.0,"publicationDate":"2025-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12826813/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146054523","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0