Vasudha L Bhavaraju, Sarada Panchanathan, Brigham C Willis, Pamela Garcia-Filion
Background: Competence-based medical education requires robust data to link competence with clinical experiences. The SARS-CoV-2 (COVID-19) pandemic abruptly altered the standard trajectory of clinical exposure in medical training programs. Residency program directors were tasked with identifying and addressing the resultant gaps in each trainee's experiences using existing tools.
Objective: This study aims to demonstrate a feasible and efficient method to capture electronic health record (EHR) data that measure the volume and variety of pediatric resident clinical experiences from a continuity clinic; generate individual-, class-, and graduate-level benchmark data; and create a visualization for learners to quickly identify gaps in clinical experiences.
Methods: This pilot was conducted in a large, urban pediatric residency program from 2016 to 2022. Through consensus, 5 pediatric faculty identified diagnostic groups that pediatric residents should see to be competent in outpatient pediatrics. Information technology consultants used International Classification of Diseases, Tenth Revision (ICD-10) codes corresponding with each diagnostic group to extract EHR patient encounter data as an indicator of exposure to the specific diagnosis. The frequency (volume) and diagnosis types (variety) seen by active residents (classes of 2020-2022) were compared with class and graduated resident (classes of 2016-2019) averages. These data were converted to percentages and translated to a radar chart visualization for residents to quickly compare their current clinical experiences with peers and graduates. Residents were surveyed on the use of these data and the visualization to identify training gaps.
Results: Patient encounter data about clinical experiences for 102 residents (N=52 graduates) were extracted. Active residents (n=50) received data reports with radar graphs biannually: 3 for the classes of 2020 and 2021 and 2 for the class of 2022. Radar charts distinctly demonstrated gaps in diagnoses exposure compared with classmates and graduates. Residents found the visualization useful in setting clinical and learning goals.
Conclusions: This pilot describes an innovative method of capturing and presenting data about resident clinical experiences, compared with peer and graduate benchmarks, to identify learning gaps that may result from disruptions or modifications in medical training. This methodology can be aggregated across specialties and institutions and potentially inform competence-based medical education.
Leveraging the Electronic Health Record to Measure Resident Clinical Experiences and Identify Training Gaps: Development and Usability Study. JMIR Medical Education; published November 6, 2024; doi:10.2196/53337.
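The benchmarking step described above (converting per-resident encounter counts into percentages of a cohort benchmark, the values that feed the radar chart) can be sketched as follows. This is a minimal illustration, not the authors' implementation; the diagnostic groups, counts, and the 100% cap are assumptions for demonstration:

```python
# Hypothetical sketch of the percent-of-benchmark computation behind the
# radar charts. Diagnostic groups and counts are illustrative, not taken
# from the study's data.

def radar_values(resident_counts, benchmark_means, cap=100.0):
    """Express a resident's encounter volume per diagnostic group as a
    percentage of the benchmark (e.g., graduate-cohort) mean, capped for
    plotting on a fixed-scale radar chart."""
    values = {}
    for group, mean in benchmark_means.items():
        seen = resident_counts.get(group, 0)
        pct = 100.0 * seen / mean if mean else 0.0
        values[group] = min(pct, cap)
    return values

# Illustrative benchmark: mean encounters per graduate, by diagnostic group.
benchmark = {"asthma": 12.0, "otitis media": 20.0, "ADHD": 8.0, "obesity": 10.0}
# Illustrative active resident: groups never seen simply appear as 0%.
resident = {"asthma": 9, "otitis media": 20, "ADHD": 2}

print(radar_values(resident, benchmark))
# → {'asthma': 75.0, 'otitis media': 100.0, 'ADHD': 25.0, 'obesity': 0.0}
```

Gaps show up directly as spokes far below 100%, which is what makes the radar form quick to read for learners comparing themselves against peers and graduates.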
Brenton T Bicknell, Danner Butler, Sydney Whalen, James Ricks, Cory J Dixon, Abigail B Clark, Olivia Spaedy, Adam Skelton, Neel Edupuganti, Lance Dzubinski, Hudson Tate, Garrett Dyess, Brenessa Lindeman, Lisa Soleymani Lehmann
Background: Recent studies, including those by the National Board of Medical Examiners, have highlighted the remarkable capabilities of recent large language models (LLMs) such as ChatGPT in passing the United States Medical Licensing Examination (USMLE). However, there is a gap in detailed analysis of LLM performance in specific medical content areas, thus limiting an assessment of their potential utility in medical education.
Objective: This study aimed to assess and compare the accuracy of successive ChatGPT versions (GPT-3.5, GPT-4, and GPT-4 Omni) in USMLE disciplines, clinical clerkships, and the clinical skills of diagnostics and management.
Methods: This study used 750 clinical vignette-based multiple-choice questions to characterize the performance of successive ChatGPT versions (ChatGPT 3.5 [GPT-3.5], ChatGPT 4 [GPT-4], and ChatGPT 4 Omni [GPT-4o]) across USMLE disciplines, clinical clerkships, and in clinical skills (diagnostics and management). Accuracy was assessed using a standardized protocol, with statistical analyses conducted to compare the models' performances.
Results: GPT-4o achieved the highest accuracy across 750 multiple-choice questions at 90.4%, outperforming GPT-4 and GPT-3.5, which scored 81.1% and 60.0%, respectively. GPT-4o's highest performances were in social sciences (95.5%), behavioral and neuroscience (94.2%), and pharmacology (93.2%). In clinical skills, GPT-4o's diagnostic accuracy was 92.7% and management accuracy was 88.8%, significantly higher than its predecessors. Notably, both GPT-4o and GPT-4 significantly outperformed the medical student average accuracy of 59.3% (95% CI 58.3-60.3).
Conclusions: GPT-4o's performance in USMLE disciplines, clinical clerkships, and clinical skills indicates substantial improvements over its predecessors, suggesting significant potential for the use of this technology as an educational aid for medical students. These findings underscore the need for careful consideration when integrating LLMs into medical education, emphasizing the importance of structured curricula to guide their appropriate use and the need for ongoing critical analyses to ensure their reliability and effectiveness.
ChatGPT-4 Omni Performance in USMLE Disciplines and Clinical Skills: Comparative Analysis. JMIR Medical Education; published November 6, 2024; doi:10.2196/63430.
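The abstract above reports accuracy differences between models but does not name the specific statistical test used. As a rough illustration of how such a comparison could be made, here is a pooled two-proportion z-test on correct-answer counts back-calculated from the reported percentages (678/750 for GPT-4o at 90.4%, 608/750 for GPT-4 at 81.1%); both counts are reconstructions, not figures from the paper:

```python
from math import sqrt, erf

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided two-proportion z-test using the pooled standard error,
    with the p-value from the normal approximation."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Standard-normal survival function via erf; two-sided p-value.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Counts back-calculated from the reported accuracies on 750 questions
# (illustrative reconstruction, not the paper's analysis).
z, p = two_proportion_z(678, 750, 608, 750)
print(z, p)  # z is roughly 5.2, p well below .001
```

On these reconstructed counts the difference is far beyond conventional significance thresholds, consistent with the abstract's claim that GPT-4o significantly outperformed its predecessors.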
Sauliha Rabia Alli, Soaad Qahhār Hossain, Sunit Das, Ross Upshur
Unlabelled: In the field of medicine, uncertainty is inherent. Physicians are asked to make decisions on a daily basis without complete certainty, whether it is in understanding the patient's problem, performing the physical examination, interpreting the findings of diagnostic tests, or proposing a management plan. The reasons for this uncertainty are widespread, including the lack of knowledge about the patient, individual physician limitations, and the limited predictive power of objective diagnostic tools. This uncertainty poses significant problems in providing competent patient care. Research efforts and teaching are attempts to reduce uncertainty that have now become inherent to medicine. Despite this, uncertainty is rampant. Artificial intelligence (AI) tools, which are being rapidly developed and integrated into practice, may change the way we navigate uncertainty. In their strongest forms, AI tools may have the ability to improve data collection on diseases, patient beliefs, values, and preferences, thereby allowing more time for physician-patient communication. By using methods not previously considered, these tools hold the potential to reduce the uncertainty in medicine, such as those arising due to the lack of clinical information and provider skill and bias. Despite this possibility, there has been considerable resistance to the implementation of AI tools in medical practice. In this viewpoint article, we discuss the impact of AI on medical uncertainty and discuss practical approaches to teaching the use of AI tools in medical schools and residency training programs, including AI ethics, real-world skills, and technological aptitude.
The Potential of Artificial Intelligence Tools for Reducing Uncertainty in Medicine and Directions for Medical Education. JMIR Medical Education; published November 4, 2024; doi:10.2196/51446.
Unlabelled: Digital transformation has disrupted many industries but has yet to revolutionize health care. Educational programs must align with a reality that goes beyond developing individuals within their own professions: professionals wishing to make an impact in digital health will need a multidisciplinary understanding of how business models, organizational processes, stakeholder relationships, and workforce dynamics across the health care ecosystem may be disrupted by digital health technology. This paper describes the redesign of an existing postgraduate program, ensuring that core digital health content is relevant, pedagogically sound, and evidence-based, and that the program provides learning and practical application of concepts of the digital transformation of health. Existing subjects were mapped to the American Medical Informatics Association Clinical Informatics Core Competencies, followed by consultation with leadership to further identify gaps and opportunities to revise the course structure. New core and elective subjects were proposed to align with the competencies. Suitable electives were chosen based on stakeholder feedback and a review of subjects in fields relevant to the digital transformation of health. The program was revised with a new title, course overview, and course intended learning outcomes; a reorganization of core subjects; and approval of new electives, adding to a suite of professional development offerings and forming a structured pathway to further qualification. Programs in digital health must move beyond purely informatics-based competencies toward enabling transformational change. Postgraduate program development in this field is possible within a short time frame with the use of established competency frameworks and expert and student consultation.
Michelle Mun, Sonia Chanchlani, Kayley Lyons, Kathleen Gray. Transforming the Future of Digital Health Education: Redesign of a Graduate Program Using Competency Mapping. JMIR Medical Education; published October 31, 2024; doi:10.2196/54112. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11542907/pdf/
Mert Karabacak, Zeynep Ozcan, Burak Berksu Ozkara, Zeynep Sude Furkan, Sotirios Bisdas
Background: Undergraduate medical students often lack hands-on research experience and fundamental scientific research skills, limiting their exposure to the practical aspects of scientific investigation. The Cerrahpasa Neuroscience Society introduced a program to address this deficiency and facilitate student-led research.
Objective: The primary goal of this initiative was to enhance medical students' research output by enabling them to generate and publish peer-reviewed papers within the framework of this pilot project. The project aimed to provide an accessible, global model for research training through structured journal clubs, mentorship from experienced peers, and resource access.
Methods: In January 2022, a total of 30 volunteer students from various Turkish medical schools participated in this course-based undergraduate research experience program. Students self-organized into 2 groups according to their preferred study type: original research or systematic review. Two final-year students with prior research experience led the project, developing training modules using selected materials. The project was implemented entirely online, with participants completing training modules before using their newly acquired theoretical knowledge to perform assigned tasks.
Results: Based on student feedback, the project timeline was adjusted to allow for greater flexibility in meeting deadlines. Despite these adjustments, participants successfully completed their tasks, applying the theoretical knowledge they had gained to their respective assignments. As of April 2024, the initiative has culminated in 3 published papers and 3 more under peer review. The project has also seen an increase in student interest in further involvement and self-paced learning.
Conclusions: This initiative leverages globally accessible resources for research training, effectively fostering research competency among participants. It has successfully demonstrated the potential for undergraduates to contribute to medical research output and paved the way for a self-sustaining, student-led research program. Despite some logistical challenges, the project provided valuable insights for future implementations, showcasing the potential for students to engage in meaningful, publishable research.
A Pilot Project to Promote Research Competency in Medical Students Through Journal Clubs: Mixed Methods Study. JMIR Medical Education; published October 31, 2024; doi:10.2196/51173. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11542906/pdf/
Maiar Elhariry, Kashish Malhotra, Kashish Goyal, Marco Bardus, Punith Kempegowda
Background: Social media is a powerful platform for disseminating health information, yet it is often riddled with misinformation. Further, few guidelines exist for producing reliable, peer-reviewed content. This study describes a framework for creating and disseminating evidence-based videos on polycystic ovary syndrome (PCOS) and thyroid conditions to improve health literacy and tackle misinformation.
Objective: The study aims to evaluate the creation, dissemination, and impact of evidence-based, peer-reviewed short videos on PCOS and thyroid disorders across social media. It also explores the experiences of content creators and assesses audience engagement.
Methods: This mixed methods prospective study was conducted between December 2022 and May 2023 and comprised five phases: (1) script generation, (2) video creation, (3) cross-platform publication, (4) process evaluation, and (5) impact evaluation. The SIMBA-CoMICs (Simulation via Instant Messaging for Bedside Application-Combined Medical Information Cines) initiative provides a structured process through which medical concepts are simplified and converted into visually engaging videos. The initiative recruited medical students interested in making visually appealing and scientifically accurate videos for social media. The students were then guided to create video scripts based on frequently searched PCOS- and thyroid-related topics. Once experts confirmed the accuracy of the scripts, the medical students produced the videos. The videos were checked by clinical experts and experts with lived experience to ensure clarity and engagement. The SIMBA-CoMICs team then guided the students in editing these videos to fit platform requirements before posting them on TikTok, Instagram, YouTube, and Twitter. Engagement metrics were tracked over 2 months. Content creators were interviewed, and thematic analysis was performed to explore their experiences.
Results: The 20 videos received 718 likes, 120 shares, and 54,686 views across all platforms, with TikTok (19,458 views) and Twitter (19,678 views) being the most popular. Engagement increased significantly, with follower growth ranging from 5% on Twitter to 89% on TikTok. Thematic analysis of interviews with 8 of 38 participants revealed 4 key themes: views on social media, advice for using social media, reasons for participating, and reflections on the project. Content creators highlighted the advantages of social media, such as large outreach (12 references), convenience (10 references), and accessibility to opportunities (7 references). Participants appreciated the nonrestrictive participation criteria, convenience (8 references), and the ability to record from home using prewritten scripts (6 references). Further recommendations to improve the content creation experience included awareness of audience demographics (9 references), sharing content on multiple platforms
{"title":"A SIMBA CoMICs Initiative to Cocreating and Disseminating Evidence-Based, Peer-Reviewed Short Videos on Social Media: Mixed Methods Prospective Study.","authors":"Maiar Elhariry, Kashish Malhotra, Kashish Goyal, Marco Bardus, Punith Kempegowda","doi":"10.2196/52924","DOIUrl":"https://doi.org/10.2196/52924","url":null,"abstract":"<p><strong>Background: </strong>Social media is a powerful platform for disseminating health information, yet it is often riddled with misinformation. Further, few guidelines exist for producing reliable, peer-reviewed content. This study describes a framework for creating and disseminating evidence-based videos on polycystic ovary syndrome (PCOS) and thyroid conditions to improve health literacy and tackle misinformation.</p><p><strong>Objective: </strong>The study aims to evaluate the creation, dissemination, and impact of evidence-based, peer-reviewed short videos on PCOS and thyroid disorders across social media. It also explores the experiences of content creators and assesses audience engagement.</p><p><strong>Methods: </strong>This mixed methods prospective study was conducted between December 2022 and May 2023 and comprised five phases: (1) script generation, (2) video creation, (3) cross-platform publication, (4) process evaluation, and (5) impact evaluation. The SIMBA-CoMICs (Simulation via Instant Messaging for Bedside Application-Combined Medical Information Cines) initiative provides a structured process where medical concepts are simplified and converted to visually engaging videos. The initiative recruited medical students interested in making visually appealing and scientifically accurate videos for social media. The students were then guided to create video scripts based on frequently searched PCOS- and thyroid-related topics. Once experts confirmed the accuracy of the scripts, the medical students produced the videos. 
The videos were checked by clinical experts and experts with lived experience to ensure clarity and engagement. The SIMBA-CoMICs team then guided the students in editing these videos to fit platform requirements before posting them on TikTok, Instagram, YouTube, and Twitter. Engagement metrics were tracked over 2 months. Content creators were interviewed, and thematic analysis was performed to explore their experiences.</p><p><strong>Results: </strong>The 20 videos received 718 likes, 120 shares, and 54,686 views across all platforms, with TikTok (19,458 views) and Twitter (19,678 views) being the most popular. Engagement increased significantly, with follower growth ranging from 5% on Twitter to 89% on TikTok. Thematic analysis of interviews with 8 out of 38 participants revealed 4 key themes: views on social media, advice for using social media, reasons for participating, and reflections on the project. Content creators highlighted the advantages of social media, such as large outreach (12 references), convenience (10 references), and accessibility to opportunities (7 references). Participants appreciated the nonrestrictive participation criteria, convenience (8 references), and the ability to record from home using prewritten scripts (6 references). 
Further recommendations to improve the content creation experience included awareness of audience demographics (9 references), sharing content on multiple platforms ","PeriodicalId":36236,"journal":{"name":"JMIR Medical Education","volume":null,"pages":null},"PeriodicalIF":3.2,"publicationDate":"2024-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142548074","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
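Cross-platform engagement of the kind tracked in the SIMBA-CoMICs evaluation can be summarized with a short script. The per-platform TikTok and Twitter view counts below come from the abstract; the Instagram and YouTube splits and the follower counts are invented placeholders chosen only so the totals match the reported 54,686 views — this is an illustrative sketch, not the study's data pipeline.

```python
# Summarize per-platform engagement metrics (partly invented numbers; see lead-in).
metrics = {
    "TikTok":    {"views": 19458, "likes": 300, "shares": 50},
    "Twitter":   {"views": 19678, "likes": 250, "shares": 40},
    "Instagram": {"views": 9000,  "likes": 100, "shares": 20},   # placeholder split
    "YouTube":   {"views": 6550,  "likes": 68,  "shares": 10},   # placeholder split
}

def total(metric_name):
    """Sum one metric (e.g., "views") across all platforms."""
    return sum(platform[metric_name] for platform in metrics.values())

def follower_growth(before, after):
    """Percentage follower growth over the tracking window, rounded."""
    return round(100 * (after - before) / before)

total_views = total("views")  # 54,686 with the splits above
```

With these placeholder splits, `total("views")` reproduces the reported cross-platform total, and `follower_growth(100, 189)` illustrates the 89% TikTok growth figure.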
Background: Artificial intelligence (AI) chatbots are poised to have a profound impact on medical education. Medical students, as early adopters of technology and future health care providers, play a crucial role in shaping the future of health care. However, little is known about the utilization of, perceptions on, and intention to use AI chatbots among medical students in China.
Objective: This study aims to explore the utilization of, perceptions on, and intention to use generative AI chatbots among medical students in China, using the Unified Theory of Acceptance and Use of Technology (UTAUT) framework. By conducting a national cross-sectional survey, we sought to identify the key determinants that influence medical students' acceptance of AI chatbots, thereby providing a basis for enhancing their integration into medical education. Understanding these factors is crucial for educators, policy makers, and technology developers to design and implement effective AI-driven educational tools that align with the needs and expectations of future health care professionals.
Methods: A web-based electronic survey questionnaire was developed and distributed via social media to medical students across the country. The UTAUT was used as a theoretical framework to design the questionnaire and analyze the data. The relationship between behavioral intention to use AI chatbots and UTAUT predictors was examined using multivariable regression.
Results: A total of 693 participants responded, representing 57 universities across 21 provinces or municipalities in China. Only a minority (199/693, 28.72%) reported using AI chatbots for studying, with ChatGPT (129/693, 18.61%) being the most commonly used. Most of the participants used AI chatbots for quickly obtaining medical information and knowledge (631/693, 91.05%) and increasing learning efficiency (594/693, 85.71%). Utilization behavior, social influence, facilitating conditions, perceived risk, and personal innovativeness showed significant positive associations with the behavioral intention to use AI chatbots (all P values were <.05).
Conclusions: Chinese medical students hold positive perceptions toward and high intentions to use AI chatbots, but there are gaps between intention and actual adoption. This highlights the need for strategies to improve access, training, and support and provide peer usage examples to fully harness the potential benefits of chatbot technology.
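The multivariable regression described in the Methods — behavioral intention regressed on UTAUT predictors — can be sketched as an ordinary least squares fit. This is a generic, hand-rolled sketch on tiny synthetic scores, not the authors' model or data; in practice one would use a statistics package rather than solving the normal equations by hand.

```python
# Minimal OLS fit of behavioral intention on UTAUT-style predictor scores.
# Solves the normal equations (X'X)b = X'y via Gaussian elimination.

def ols_fit(predictors, y):
    X = [[1.0] + list(row) for row in predictors]  # prepend intercept column
    n, k = len(X), len(X[0])
    A = [[sum(X[i][p] * X[i][q] for i in range(n)) for q in range(k)] for p in range(k)]
    c = [sum(X[i][p] * y[i] for i in range(n)) for p in range(k)]
    for p in range(k):  # forward elimination with partial pivoting
        piv = max(range(p, k), key=lambda r: abs(A[r][p]))
        A[p], A[piv] = A[piv], A[p]
        c[p], c[piv] = c[piv], c[p]
        for r in range(p + 1, k):
            f = A[r][p] / A[p][p]
            for q in range(p, k):
                A[r][q] -= f * A[p][q]
            c[r] -= f * c[p]
    b = [0.0] * k
    for p in reversed(range(k)):  # back substitution
        b[p] = (c[p] - sum(A[p][q] * b[q] for q in range(p + 1, k))) / A[p][p]
    return b  # [intercept, coef_1, coef_2, ...]

# Toy data: intention = 1 + 2*social_influence + 3*facilitating_conditions
rows = [(0, 0), (1, 0), (0, 1), (1, 1)]
intention = [1, 3, 4, 6]
coefs = ols_fit(rows, intention)  # recovers [1.0, 2.0, 3.0]
```

The positive fitted coefficients play the role of the "significant positive associations" the abstract reports; significance testing of each coefficient would require standard errors, which this sketch omits.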
Utilization of, Perceptions on, and Intention to Use AI Chatbots Among Medical Students in China: National Cross-Sectional Study. Wenjuan Tao, Jinming Yang, Xing Qu. JMIR Medical Education. doi:10.2196/57132. Published 2024-10-28.
Background: Critical evaluation of naloxone coprescription academic detailing programs has been positive, but little research has focused on how participant thinking changes during academic detailing.
Objective: The dual purposes of this study were to (1) present a metacognitive evaluation of a naloxone coprescription academic detailing intervention and (2) describe the application of a metacognitive evaluation for future medical education interventions.
Methods: Data were obtained from a pre-post knowledge assessment of a web-based, self-paced intervention designed to increase knowledge of clinical and organizational best practices for the coprescription of naloxone. To assess metacognition, items were designed with confidence-weighted true-false scoring. Multiple metacognitive scores were calculated: 3 content knowledge scores and 5 confidence-weighted true-false scores. Statistical analysis examined whether there were significant differences in scores before and after intervention. Analysis of overall content knowledge showed significant improvement at posttest.
Results: There was a significant positive increase in absolute accuracy of participant confidence judgments, confidence in correct probability, and confidence in incorrect probability (all P values were <.05). Overall, results suggest an improvement in content knowledge scores after intervention and, metacognitively, suggest that individuals were more confident in their answer choices, regardless of correctness.
Conclusions: Implications include the potential application of metacognitive evaluations to assess nuances in learner performance during academic detailing interventions and as a feedback mechanism to reinforce learning and guide curricular design.
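The confidence-weighted true-false scores described in the Methods can be illustrated with a small helper. The score definitions here — content score, mean confidence on correct versus incorrect answers, and an absolute calibration gap — are common formulations chosen for illustration, not the study's exact formulas.

```python
# Illustrative confidence-weighted true-false scoring.
# Each response is (answered_correctly, confidence in the chosen answer, 0-1).

def mean(xs):
    return sum(xs) / len(xs)

def score(responses):
    correct = [conf for ok, conf in responses if ok]
    incorrect = [conf for ok, conf in responses if not ok]
    return {
        "content": len(correct) / len(responses),   # fraction of items correct
        "conf_correct": mean(correct),              # confidence when right
        "conf_incorrect": mean(incorrect),          # confidence when wrong
        # calibration: mean gap between confidence and the 0/1 outcome
        # (smaller gap = better-calibrated confidence judgments)
        "absolute_accuracy": mean([abs(conf - (1.0 if ok else 0.0))
                                   for ok, conf in responses]),
    }

example = score([(True, 0.9), (True, 0.6), (False, 0.8), (False, 0.4)])
```

In the study's pattern of results, both `conf_correct` and `conf_incorrect` rose after the intervention — learners grew more confident regardless of correctness — which is exactly the nuance a plain content score would miss.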
Naloxone Coprescribing and the Prevention of Opioid Overdoses: Quasi-Experimental Metacognitive Assessment of a Novel Education Initiative. Michael Enich, Cory Morton, Richard Jermyn. JMIR Medical Education. doi:10.2196/54280. Published 2024-10-28.
Thomas Clavier, Emma Chevalier, Zoé Demailly, Benoit Veber, Imad-Abdelkader Messaadi, Benjamin Popoff
Background: Social media (SoMe) have taken a major place in the medical field, and younger generations are increasingly using them as their primary source to find information.
Objective: This study aimed to describe the use of SoMe for medical education among French medical students and assess the prevalence of smartphone addiction in this population.
Methods: A cross-sectional web-based survey was conducted among French medical students (second to sixth year of study). The questionnaire collected information on SoMe use for medical education and professional behavior. Smartphone addiction was assessed using the Smartphone Addiction Scale Short-Version (SAS-SV) score.
Results: A total of 762 medical students responded to the survey. Of these, 762 (100%) were SoMe users, spending a median of 120 (IQR 60-150) minutes per day on SoMe; 656 (86.1%) used SoMe for medical education, with YouTube, Instagram, and Facebook being the most popular platforms. The misuse of SoMe in a professional context was also identified; 27.2% (207/762) of students posted hospital internship content, and 10.8% (82/762) searched for a patient's name on SoMe. Smartphone addiction was prevalent among 29.1% (222/762) of respondents, with a significant correlation between increased SoMe use and SAS-SV score (r=0.39, 95% CI 0.33-0.45; P<.001). Smartphone-addicted students reported a higher impact on study time (211/222, 95% vs 344/540, 63.6%; P<.001) and a greater tendency to share hospital internship content on social networks (78/222, 35.1% vs 129/540, 23.8%; P=.002).
Conclusions: Our findings reveal the extensive use of SoMe for medical education among French medical students, alongside a notable prevalence of smartphone addiction. These results highlight the need for medical schools and educators to address the responsible use of SoMe and develop strategies to mitigate the risks associated with excessive use and addiction.
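The reported correlation between SoMe time and SAS-SV score (r=0.39, 95% CI 0.33-0.45, n=762) can be reproduced in form with a Pearson coefficient and a Fisher z confidence interval. The helper below is a generic sketch, not the authors' analysis code; only the r value and sample size fed into `fisher_ci` come from the abstract.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

def fisher_ci(r, n, z_crit=1.96):
    """95% CI for a correlation via the Fisher z-transformation."""
    z = math.atanh(r)
    se = 1 / math.sqrt(n - 3)
    return math.tanh(z - z_crit * se), math.tanh(z + z_crit * se)

lo, hi = fisher_ci(0.39, 762)  # the abstract's r and sample size
```

Rounding `lo` and `hi` to two decimals gives 0.33 and 0.45, matching the interval reported in the abstract.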
Social Media Usage for Medical Education and Smartphone Addiction Among Medical Students: National Web-Based Survey. Thomas Clavier, Emma Chevalier, Zoé Demailly, Benoit Veber, Imad-Abdelkader Messaadi, Benjamin Popoff. JMIR Medical Education. doi:10.2196/55149. Published 2024-10-22.
Paulina M Devlin, Oluwabukola Akingbola, Jody Stonehocker, James T Fitzgerald, Abigail Ford Winkel, Maya M Hammoud, Helen K Morgan
Background: As part of the residency application process in the United States, many medical specialties now offer applicants the opportunity to send program signals that indicate high interest to a limited number of residency programs. To determine which residency programs to apply to, and which programs to send signals to, applicants need accurate information to determine which programs align with their future training goals. Most applicants use a program's website to review program characteristics and criteria, so describing the current state of residency program websites can inform programs of best practices.

Objective: This study aims to characterize information available on obstetrics and gynecology residency program websites and to determine whether there are differences in information available between different types of residency programs.

Methods: This was a cross-sectional observational study of all US obstetrics and gynecology residency program website content. The authorship group identified factors that would be useful for residency applicants around program demographics and learner trajectories; application criteria, including standardized testing metrics, residency statistics, and benefits; and diversity, equity, and inclusion mission statements and values. Two authors examined all available websites from November 2021 through March 2022. Data analysis consisted of descriptive statistics and one-way ANOVA, with P<.05 considered significant.

Results: Among 290 programs, 283 (97.6%) had websites; 238 (82.1%) listed medical schools of current residents; 158 (54.5%) described residency alumni trajectories; 107 (36.9%) included guidance related to the preferred United States Medical Licensing Examination Step 1 scores; 53 (18.3%) included guidance related to the Comprehensive Osteopathic Medical Licensing Examination Level 1 scores; 185 (63.8%) included international applicant guidance; 132 (45.5%) included a program-specific mission statement; 84 (29%) included a diversity, equity, and inclusion statement; and 167 (57.6%) included program-specific media or links to program social media on their websites. University-based programs were more likely to include a variety of information compared to community-based university-affiliated and community-based programs, including medical schools of current residents (113/123, 91.9%, university-based; 85/111, 76.6%, community-based university-affiliated; 40/56, 71.4%, community-based; P<.001); alumni trajectories (90/123, 73.2%, university-based; 51/111, 45.9%, community-based university-affiliated; 17/56, 30.4%, community-based; P<.001); the United States Medical Licensing Examination Step 1 score guidance (58/123, 47.2%, university-based; 36/111, 32.4%, community-based university-affiliated; 13/56, 23.2%, community-based; P=.004); and diversity, equity, and inclusion statements (57/123, 46.3%, university-based).
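The Results compare proportions across three program types (the study itself used one-way ANOVA). A chi-square test of independence is a common alternative for this kind of count data; the sketch below applies it — our choice of test, not the authors' — to the "medical schools of current residents" comparison, whose counts come from the abstract, against the df=2, α=.05 critical value.

```python
# Chi-square test of independence for a 2 x k table of yes/no counts.
def chi_square(yes_counts, totals):
    no_counts = [t - y for y, t in zip(yes_counts, totals)]
    grand = sum(totals)
    total_yes = sum(yes_counts)
    stat = 0.0
    for y, no, t in zip(yes_counts, no_counts, totals):
        exp_yes = t * total_yes / grand          # expected "yes" under independence
        exp_no = t * (grand - total_yes) / grand  # expected "no" under independence
        stat += (y - exp_yes) ** 2 / exp_yes + (no - exp_no) ** 2 / exp_no
    return stat

# Programs listing medical schools of current residents, by program type:
# university-based, community-based university-affiliated, community-based.
stat = chi_square([113, 85, 40], [123, 111, 56])
CRITICAL_DF2_05 = 5.991  # chi-square critical value, df = 2, alpha = .05
```

The statistic (about 14.6) comfortably exceeds the critical value, consistent with the P<.001 the abstract reports for this comparison.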
Opportunities to Improve Communication With Residency Applicants: Cross-Sectional Study of Obstetrics and Gynecology Residency Program Websites. Paulina M Devlin, Oluwabukola Akingbola, Jody Stonehocker, James T Fitzgerald, Abigail Ford Winkel, Maya M Hammoud, Helen K Morgan. JMIR Medical Education. doi:10.2196/48518. Published 2024-10-21.