Background: Although digital health is essential for improving health care, its adoption remains slow due to the lack of literacy in this area. Therefore, it is crucial for health professionals to acquire digital skills and for a digital competence assessment and accreditation model to be implemented to make advances in this field.
Objective: This study had two objectives: (1) to create a specific map of digital competences for health professionals and (2) to define and test a digital competence assessment and accreditation model for health professionals.
Methods: We took an iterative mixed methods approach, which included a review of the gray literature and consultation with local experts. We used the arithmetic mean and SD in descriptive statistics, P values in hypothesis testing and subgroup comparisons, the greatest lower bound in test diagnosis, and the discrimination index in study instrument analysis.
Results: The assessment model designed in accordance with the competence content defined in the map of digital competences and based on scenarios had excellent internal consistency overall (greatest lower bound=0.91). Although most study participants (110/122, 90.2%) reported an intermediate self-perceived digital competence level, we found that the vast majority would not attain a level-2 Accreditation of Competence in Information and Communication Technologies.
Conclusions: Knowing the digital competence level of health professionals based on a defined competence framework should enable such professionals to be trained and updated to meet real needs in their specific professional contexts and, consequently, take full advantage of the potential of digital technologies. These results have informed the Health Plan for Catalonia 2021-2025, thus laying the foundations for creating and offering specific training to assess and certify the digital competence of such professionals.
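The item-level analysis named in the Methods (descriptive statistics plus a discrimination index per scenario item) can be illustrated with a short, self-contained sketch. This is a hypothetical example using the common upper/lower-group method on invented scores, not the study's actual instrument or data; the greatest lower bound requires a factor-analytic estimate and is omitted here.

```python
# Hypothetical sketch: item discrimination index via the upper/lower 27% method,
# plus descriptive statistics. All data below are invented, not the study's.
import statistics

def discrimination_index(item_correct, total_scores, frac=0.27):
    """D = p_upper - p_lower for one item, using top/bottom `frac` of examinees."""
    n = len(total_scores)
    k = max(1, round(n * frac))
    order = sorted(range(n), key=lambda i: total_scores[i])  # ascending by total
    lower, upper = order[:k], order[-k:]
    p_upper = sum(item_correct[i] for i in upper) / k
    p_lower = sum(item_correct[i] for i in lower) / k
    return p_upper - p_lower

# Toy data: 10 examinees, one scenario item (1 = correct), and total test scores.
item = [1, 0, 1, 1, 0, 1, 1, 0, 0, 1]
totals = [55, 40, 60, 58, 35, 62, 59, 30, 33, 61]

d = discrimination_index(item, totals)
mean, sd = statistics.mean(totals), statistics.stdev(totals)
```

Items with a discrimination index near 0 (or negative) fail to separate strong from weak examinees and are candidates for revision.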
{"title":"Design, Implementation, and Analysis of an Assessment and Accreditation Model to Evaluate a Digital Competence Framework for Health Professionals: Mixed Methods Study.","authors":"Francesc Saigí-Rubió, Teresa Romeu, Eulàlia Hernández Encuentra, Montse Guitert, Erik Andrés, Elisenda Reixach","doi":"10.2196/53462","DOIUrl":"10.2196/53462","url":null,"abstract":"<p><strong>Background: </strong>Although digital health is essential for improving health care, its adoption remains slow due to the lack of literacy in this area. Therefore, it is crucial for health professionals to acquire digital skills and for a digital competence assessment and accreditation model to be implemented to make advances in this field.</p><p><strong>Objective: </strong>This study had two objectives: (1) to create a specific map of digital competences for health professionals and (2) to define and test a digital competence assessment and accreditation model for health professionals.</p><p><strong>Methods: </strong>We took an iterative mixed methods approach, which included a review of the gray literature and consultation with local experts. We used the arithmetic mean and SD in descriptive statistics, P values in hypothesis testing and subgroup comparisons, the greatest lower bound in test diagnosis, and the discrimination index in study instrument analysis.</p><p><strong>Results: </strong>The assessment model designed in accordance with the competence content defined in the map of digital competences and based on scenarios had excellent internal consistency overall (greatest lower bound=0.91). 
Although most study participants (110/122, 90.2%) reported an intermediate self-perceived digital competence level, we found that the vast majority would not attain a level-2 Accreditation of Competence in Information and Communication Technologies.</p><p><strong>Conclusions: </strong>Knowing the digital competence level of health professionals based on a defined competence framework should enable such professionals to be trained and updated to meet real needs in their specific professional contexts and, consequently, take full advantage of the potential of digital technologies. These results have informed the Health Plan for Catalonia 2021-2025, thus laying the foundations for creating and offering specific training to assess and certify the digital competence of such professionals.</p>","PeriodicalId":36236,"journal":{"name":"JMIR Medical Education","volume":null,"pages":null},"PeriodicalIF":3.2,"publicationDate":"2024-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11528169/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142476699","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Shuang Wang, Liuying Yang, Min Li, Xinghe Zhang, Xiantao Tai
Background: Incremental advancements in artificial intelligence (AI) technology have facilitated its integration into various disciplines. In particular, the infusion of AI into medical education has emerged as a significant trend, with noteworthy research findings. Consequently, a comprehensive review and analysis of the current research landscape of AI in medical education is warranted.
Objective: This study aims to conduct a bibliometric analysis of pertinent papers, spanning the years 2013-2022, using CiteSpace and VOSviewer. The study visually represents the existing research status and trends of AI in medical education.
Methods: Articles related to AI and medical education, published between 2013 and 2022, were systematically searched in the Web of Science core database. Two reviewers scrutinized the initially retrieved papers, based on their titles and abstracts, to eliminate papers unrelated to the topic. The selected papers were then analyzed and visualized for country, institution, author, reference, and keywords using CiteSpace and VOSviewer.
Results: A total of 195 papers pertaining to AI in medical education were identified from 2013 to 2022. The annual publications demonstrated an increasing trend over time. The United States emerged as the most active country in this research arena, and Harvard Medical School and the University of Toronto were the most active institutions. Prolific authors in this field included Vincent Bissonnette, Charlotte Blacketer, Rolando F Del Maestro, Nicole Ledows, Nykan Mirchi, Alexander Winkler-Schwartz, and Recai Yilamaz. The most cited paper was "Medical Students' Attitude Towards Artificial Intelligence: A Multicentre Survey." Keyword analysis revealed that "radiology," "medical physics," "ehealth," "surgery," and "specialty" were the primary focus areas, whereas "big data" and "management" emerged as research frontiers.
Conclusions: The study underscores the promising potential of AI in medical education research. Current research directions encompass radiology, medical information management, and other aspects. Technological progress is expected to broaden these directions further. There is an urgent need to bolster interregional collaboration and enhance research quality. These findings offer valuable insights for researchers to identify perspectives and guide future research directions.
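The keyword maps that tools such as CiteSpace and VOSviewer draw rest on keyword co-occurrence counts across papers. A minimal sketch of that core computation, with invented paper keyword sets:

```python
# Hypothetical sketch: keyword co-occurrence counting, the computation underlying
# VOSviewer-style keyword networks. The keyword sets below are invented examples.
from itertools import combinations
from collections import Counter

papers = [
    {"radiology", "deep learning", "medical education"},
    {"radiology", "big data", "medical education"},
    {"surgery", "simulation", "medical education"},
]

cooc = Counter()
for kws in papers:
    # Count each unordered keyword pair once per paper; sorting normalizes order.
    for a, b in combinations(sorted(kws), 2):
        cooc[(a, b)] += 1

# The most frequent pairs become the strongest edges in the keyword network.
top = cooc.most_common(3)
```

In a real bibliometric pipeline the pair counts would then be thresholded and passed to a layout/clustering algorithm to produce the visual map.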
{"title":"Medical Education and Artificial Intelligence: Web of Science-Based Bibliometric Analysis (2013-2022).","authors":"Shuang Wang, Liuying Yang, Min Li, Xinghe Zhang, Xiantao Tai","doi":"10.2196/51411","DOIUrl":"10.2196/51411","url":null,"abstract":"<p><strong>Background: </strong>Incremental advancements in artificial intelligence (AI) technology have facilitated its integration into various disciplines. In particular, the infusion of AI into medical education has emerged as a significant trend, with noteworthy research findings. Consequently, a comprehensive review and analysis of the current research landscape of AI in medical education is warranted.</p><p><strong>Objective: </strong>This study aims to conduct a bibliometric analysis of pertinent papers, spanning the years 2013-2022, using CiteSpace and VOSviewer. The study visually represents the existing research status and trends of AI in medical education.</p><p><strong>Methods: </strong>Articles related to AI and medical education, published between 2013 and 2022, were systematically searched in the Web of Science core database. Two reviewers scrutinized the initially retrieved papers, based on their titles and abstracts, to eliminate papers unrelated to the topic. The selected papers were then analyzed and visualized for country, institution, author, reference, and keywords using CiteSpace and VOSviewer.</p><p><strong>Results: </strong>A total of 195 papers pertaining to AI in medical education were identified from 2013 to 2022. The annual publications demonstrated an increasing trend over time. The United States emerged as the most active country in this research arena, and Harvard Medical School and the University of Toronto were the most active institutions. Prolific authors in this field included Vincent Bissonnette, Charlotte Blacketer, Rolando F Del Maestro, Nicole Ledows, Nykan Mirchi, Alexander Winkler-Schwartz, and Recai Yilamaz. 
The paper with the highest citation was \"Medical Students' Attitude Towards Artificial Intelligence: A Multicentre Survey.\" Keyword analysis revealed that \"radiology,\" \"medical physics,\" \"ehealth,\" \"surgery,\" and \"specialty\" were the primary focus, whereas \"big data\" and \"management\" emerged as research frontiers.</p><p><strong>Conclusions: </strong>The study underscores the promising potential of AI in medical education research. Current research directions encompass radiology, medical information management, and other aspects. Technological progress is expected to broaden these directions further. There is an urgent need to bolster interregional collaboration and enhance research quality. These findings offer valuable insights for researchers to identify perspectives and guide future research directions.</p>","PeriodicalId":36236,"journal":{"name":"JMIR Medical Education","volume":null,"pages":null},"PeriodicalIF":3.2,"publicationDate":"2024-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11486481/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142401547","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Jing Miao, Charat Thongprayoon, Oscar Garcia Valencia, Iasmina M Craici, Wisit Cheungpasitporn
Background: The 2024 nephrology fellowship match data show declining interest in nephrology in the United States, with an 11% drop in candidates and only 66% (321/488) of positions filled.
Objective: The study aims to discern the factors influencing this trend using ChatGPT, a leading chatbot model, for insights into the comparative appeal of nephrology versus other internal medicine specialties.
Methods: Using the GPT-4 model, the study compared nephrology with 13 other internal medicine specialties, evaluating each on 7 criteria including intellectual complexity, work-life balance, procedural involvement, research opportunities, patient relationships, career demand, and financial compensation. Each criterion was assigned scores from 1 to 10, with the cumulative score determining the ranking. The approach included counteracting potential bias by instructing GPT-4 to favor other specialties over nephrology in reverse scenarios.
Results: GPT-4 ranked nephrology only above sleep medicine. While nephrology scored higher than hospice and palliative medicine, it fell short in key criteria such as work-life balance, patient relationships, and career demand. When examining the percentage of filled positions in the 2024 appointment year match, nephrology's filled rate was 66%, only higher than the 45% (155/348) filled rate of geriatric medicine. Nephrology's score decreased by 4%-14% in 5 criteria including intellectual challenge and complexity, procedural involvement, career opportunity and demand, research and academic opportunities, and financial compensation.
Conclusions: ChatGPT does not favor nephrology over most internal medicine specialties, highlighting its diminishing appeal as a career choice. This trend raises significant concerns, especially considering the overall physician shortage, and prompts a reevaluation of factors affecting specialty choice among medical residents.
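The scoring scheme described in the Methods (each specialty scored 1-10 on 7 criteria, ranked by cumulative score) can be sketched as follows. The scores and specialty subset are invented placeholders, not GPT-4's actual outputs:

```python
# Hypothetical sketch of the cumulative-score ranking described in the Methods.
# Criterion names and all scores are illustrative, not the study's GPT-4 outputs.
criteria = ["intellectual complexity", "work-life balance", "procedures",
            "research", "patient relationships", "career demand", "compensation"]

scores = {
    "nephrology":     [8, 5, 4, 7, 6, 5, 5],
    "cardiology":     [9, 5, 8, 8, 7, 9, 9],
    "sleep medicine": [6, 8, 3, 5, 5, 4, 6],
}

# Sum the 7 criterion scores per specialty, then rank highest total first.
totals = {specialty: sum(vals) for specialty, vals in scores.items()}
ranking = sorted(totals, key=totals.get, reverse=True)
```

With equal criterion weights, a specialty's rank is driven entirely by its summed score; a weighted variant would multiply each criterion by an importance factor before summing.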
{"title":"Navigating Nephrology's Decline Through a GPT-4 Analysis of Internal Medicine Specialties in the United States: Qualitative Study.","authors":"Jing Miao, Charat Thongprayoon, Oscar Garcia Valencia, Iasmina M Craici, Wisit Cheungpasitporn","doi":"10.2196/57157","DOIUrl":"10.2196/57157","url":null,"abstract":"<p><strong>Background: </strong>The 2024 Nephrology fellowship match data show the declining interest in nephrology in the United States, with an 11% drop in candidates and a mere 66% (321/488) of positions filled.</p><p><strong>Objective: </strong>The study aims to discern the factors influencing this trend using ChatGPT, a leading chatbot model, for insights into the comparative appeal of nephrology versus other internal medicine specialties.</p><p><strong>Methods: </strong>Using the GPT-4 model, the study compared nephrology with 13 other internal medicine specialties, evaluating each on 7 criteria including intellectual complexity, work-life balance, procedural involvement, research opportunities, patient relationships, career demand, and financial compensation. Each criterion was assigned scores from 1 to 10, with the cumulative score determining the ranking. The approach included counteracting potential bias by instructing GPT-4 to favor other specialties over nephrology in reverse scenarios.</p><p><strong>Results: </strong>GPT-4 ranked nephrology only above sleep medicine. While nephrology scored higher than hospice and palliative medicine, it fell short in key criteria such as work-life balance, patient relationships, and career demand. When examining the percentage of filled positions in the 2024 appointment year match, nephrology's filled rate was 66%, only higher than the 45% (155/348) filled rate of geriatric medicine. 
Nephrology's score decreased by 4%-14% in 5 criteria including intellectual challenge and complexity, procedural involvement, career opportunity and demand, research and academic opportunities, and financial compensation.</p><p><strong>Conclusions: </strong>ChatGPT does not favor nephrology over most internal medicine specialties, highlighting its diminishing appeal as a career choice. This trend raises significant concerns, especially considering the overall physician shortage, and prompts a reevaluation of factors affecting specialty choice among medical residents.</p>","PeriodicalId":36236,"journal":{"name":"JMIR Medical Education","volume":null,"pages":null},"PeriodicalIF":3.2,"publicationDate":"2024-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11486450/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142401548","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sebastian Hofstetter, Max Zilezinski, Dominik Behr, Bernhard Kraft, Christian Buhtz, Denny Paulicke, Anja Wolf, Christina Klus, Dietrich Stoevesandt, Karsten Schwarz, Patrick Jahn
Background: Current challenges in patient care have increased research on technology use in nursing and health care. Digital assistive technologies (DATs) are one option that can be incorporated into care processes. However, how the application of DATs should be introduced to nurses and care professionals must be clarified. No structured and effective education concepts for the patient-oriented integration of DATs in the nursing sector are currently available.
Objective: This study aims to examine how a structured and guided integration and education concept, herein termed the sensitization, evaluative introduction, qualification, and implementation (SEQI) education concept, can support the integration of DATs into nursing practices.
Methods: This study used an explanatory, sequential study design with a mixed methods approach. The SEQI intervention was run in 26 long-term care facilities oriented toward older adults in Germany, after a 5-day training course in each. The participating care professionals were asked to test 1 of 6 DATs in real-world practice over 3 days. Surveys (n=112) were then administered to record the intention to use DATs at 3 measurement points, and guided qualitative interviews with care professionals (n=12) were conducted to evaluate the learning concepts and effects of the intervention.
Results: As this was a pilot study, no sample size calculation was carried out, and P values were not reported. The participating care professionals were generally willing to integrate DATs-as an additional resource-into nursing processes even before the 4-stage SEQI intervention was presented. However, the intervention provided additional background knowledge and sensitized care professionals to the digital transformation, enabling them to evaluate how DATs fit in the health care sector, what qualifies these technologies for correct application, and what promotes their use. The care professionals expressed specific ideas and requirements for both technology-related education concepts and nursing DATs.
Conclusions: Actively matching technical support, physical limitations, and patients' needs is crucial when selecting DATs and integrating them into nursing processes. To this end, using a structured process such as SEQI that strengthens care professionals' ability to integrate DATs can help improve the benefits of such technology in the health care setting. Practical, application-oriented learning can promote the long-term implementation of DATs.
{"title":"Integrating Digital Assistive Technologies Into Care Processes: Mixed Methods Study.","authors":"Sebastian Hofstetter, Max Zilezinski, Dominik Behr, Bernhard Kraft, Christian Buhtz, Denny Paulicke, Anja Wolf, Christina Klus, Dietrich Stoevesandt, Karsten Schwarz, Patrick Jahn","doi":"10.2196/54083","DOIUrl":"10.2196/54083","url":null,"abstract":"<p><strong>Background: </strong>Current challenges in patient care have increased research on technology use in nursing and health care. Digital assistive technologies (DATs) are one option that can be incorporated into care processes. However, how the application of DATs should be introduced to nurses and care professionals must be clarified. No structured and effective education concepts for the patient-oriented integration of DATs in the nursing sector are currently available.</p><p><strong>Objective: </strong>This study aims to examine how a structured and guided integration and education concept, herein termed the sensitization, evaluative introduction, qualification, and implementation (SEQI) education concept, can support the integration of DATs into nursing practices.</p><p><strong>Methods: </strong>This study used an explanatory, sequential study design with a mixed methods approach. The SEQI intervention was run in 26 long-term care facilities oriented toward older adults in Germany after a 5-day training course in each. The participating care professionals were asked to test 1 of 6 DATs in real-world practice over 3 days. Surveys (n=112) were then administered that recorded the intention to use DATs at 3 measurement points, and guided qualitative interviews with care professionals (n=12) were conducted to evaluate the learning concepts and effects of the intervention.</p><p><strong>Results: </strong>As this was a pilot study, no sample size calculation was carried out, and P values were not reported. 
The participating care professionals were generally willing to integrate DATs-as an additional resource-into nursing processes even before the 4-stage SEQI intervention was presented. However, the intervention provided additional background knowledge and sensitized care professionals to the digital transformation, enabling them to evaluate how DATs fit in the health care sector, what qualifies these technologies for correct application, and what promotes their use. The care professionals expressed specific ideas and requirements for both technology-related education concepts and nursing DATs.</p><p><strong>Conclusions: </strong>Actively matching technical support, physical limitations, and patients' needs is crucial when selecting DATs and integrating them into nursing processes. To this end, using a structured process such as SEQI that strengthens care professionals' ability to integrate DATs can help improve the benefits of such technology in the health care setting. Practical, application-oriented learning can promote the long-term implementation of DATs.</p>","PeriodicalId":36236,"journal":{"name":"JMIR Medical Education","volume":null,"pages":null},"PeriodicalIF":3.2,"publicationDate":"2024-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11499723/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142393971","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Rola Khamisy-Farah, Eden Biras, Rabie Shehadeh, Ruba Tuma, Hisham Atwan, Anna Siri, Manlio Converti, Francesco Chirico, Łukasz Szarpak, Carlo Biz, Raymond Farah, Nicola Bragazzi
Background: The integration of gender and sexuality awareness in health care is increasingly recognized as vital for patient outcomes. Despite this, there is a notable lack of comprehensive data on the current state of physicians' training and perceptions in these areas, leading to a gap in targeted educational interventions and optimal health care delivery.
Objective: The study's aim was to explore the experiences and perceptions of attending and resident physicians regarding the inclusion of gender and sexuality content in medical school curricula and professional practice in Israel.
Methods: This cross-sectional survey targeted a diverse group of physicians across various specializations and experience levels. Distributed through Israeli Medical Associations and professional networks, it included sections on experiences with gender and sexuality content, perceptions of knowledge, the impact of medical school curricula on professional capabilities, and views on integrating gender medicine in medical education. Descriptive and correlational analyses, along with gender-based and medical status-based comparisons, were used, complemented by a qualitative analysis of participants' replies.
Results: The survey, encompassing 189 respondents, revealed low-to-moderate exposure to gender and sexuality content in medical school curricula, with a similar perception of preparedness. A need for more comprehensive training was widely recognized. The majority valued training in these areas for enhancing professional capabilities, identifying 10 essential gender-related knowledge areas. The preference for integrating gender medicine throughout medical education was significant. Gender-based analysis indicated variations in exposure and perceptions.
Conclusions: The study highlights a crucial need for the inclusion of gender and sexuality awareness in medical education and practice. It suggests the necessity for curriculum development, targeted training programs, policy advocacy, mentorship initiatives, and research to evaluate the effectiveness of these interventions. The findings serve as a foundation for future directions in medical education, aiming for a more inclusive, aware, and prepared medical workforce.
{"title":"Gender and Sexuality Awareness in Medical Education and Practice: Mixed Methods Study.","authors":"Rola Khamisy-Farah, Eden Biras, Rabie Shehadeh, Ruba Tuma, Hisham Atwan, Anna Siri, Manlio Converti, Francesco Chirico, Łukasz Szarpak, Carlo Biz, Raymond Farah, Nicola Bragazzi","doi":"10.2196/59009","DOIUrl":"10.2196/59009","url":null,"abstract":"<p><strong>Background: </strong>The integration of gender and sexuality awareness in health care is increasingly recognized as vital for patient outcomes. Despite this, there is a notable lack of comprehensive data on the current state of physicians' training and perceptions in these areas, leading to a gap in targeted educational interventions and optimal health care delivery.</p><p><strong>Objective: </strong>The study's aim was to explore the experiences and perceptions of attending and resident physicians regarding the inclusion of gender and sexuality content in medical school curricula and professional practice in Israel.</p><p><strong>Methods: </strong>This cross-sectional survey targeted a diverse group of physicians across various specializations and experience levels. Distributed through Israeli Medical Associations and professional networks, it included sections on experiences with gender and sexuality content, perceptions of knowledge, the impact of medical school curricula on professional capabilities, and views on integrating gender medicine in medical education. Descriptive and correlational analyses, along with gender-based and medical status-based comparisons, were used, complemented, and enhanced by qualitative analysis of participants' replies.</p><p><strong>Results: </strong>The survey, encompassing 189 respondents, revealed low-to-moderate exposure to gender and sexuality content in medical school curricula, with a similar perception of preparedness. A need for more comprehensive training was widely recognized. 
The majority valued training in these areas for enhancing professional capabilities, identifying 10 essential gender-related knowledge areas. The preference for integrating gender medicine throughout medical education was significant. Gender-based analysis indicated variations in exposure and perceptions.</p><p><strong>Conclusions: </strong>The study highlights a crucial need for the inclusion of gender and sexuality awareness in medical education and practice. It suggests the necessity for curriculum development, targeted training programs, policy advocacy, mentorship initiatives, and research to evaluate the effectiveness of these interventions. The findings serve as a foundation for future directions in medical education, aiming for a more inclusive, aware, and prepared medical workforce.</p>","PeriodicalId":36236,"journal":{"name":"JMIR Medical Education","volume":null,"pages":null},"PeriodicalIF":3.2,"publicationDate":"2024-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11496915/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141996614","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Anthony James Goodings, Sten Kajitani, Allison Chhor, Ahmad Albakri, Mila Pastrak, Megha Kodancha, Rowan Ives, Yoo Bin Lee, Kari Kajitani
Background: This research explores the capabilities of ChatGPT-4 in passing the American Board of Family Medicine (ABFM) Certification Examination. Addressing a gap in existing literature, where earlier artificial intelligence (AI) models showed limitations in medical board examinations, this study evaluates the enhanced features and potential of ChatGPT-4, especially in document analysis and information synthesis.
Objective: The primary goal is to assess whether ChatGPT-4, when provided with extensive preparation resources and when using sophisticated data analysis, can achieve a score equal to or above the passing threshold for the Family Medicine Board Examinations.
Methods: In this study, ChatGPT-4 was embedded in a specialized subenvironment, "AI Family Medicine Board Exam Taker," designed to closely mimic the conditions of the ABFM Certification Examination. This subenvironment enabled the AI to access and analyze a range of relevant study materials, including a primary medical textbook and supplementary web-based resources. The AI was presented with a series of ABFM-type examination questions, reflecting the breadth and complexity typical of the examination. Emphasis was placed on assessing the AI's ability to interpret and respond to these questions accurately, leveraging its advanced data processing and analysis capabilities within this controlled subenvironment.
Results: In our study, ChatGPT-4's performance was quantitatively assessed on 300 practice ABFM examination questions. The AI achieved a correct response rate of 88.67% (95% CI 85.08%-92.25%) for the Custom Robot version and 87.33% (95% CI 83.57%-91.10%) for the Regular version. Statistical analysis, including the McNemar test (P=.45), indicated no significant difference in accuracy between the 2 versions. In addition, the chi-square test for error-type distribution (P=.32) revealed no significant variation in the pattern of errors across versions. These results highlight ChatGPT-4's capacity for high-level performance and consistency in responding to complex medical examination questions under controlled conditions.
Conclusions: The study demonstrates that ChatGPT-4, particularly when equipped with specialized preparation and when operating in a tailored subenvironment, shows promising potential in handling the intricacies of medical board examinations. While its performance is comparable with the expected standards for passing the ABFM Certification Examination, further enhancements in AI technology and tailored training methods could push these capabilities to new heights. This exploration opens avenues for integrating AI tools such as ChatGPT-4 in medical education and assessment, emphasizing the importance of continuous advancement and specialized training in medical applications of AI.
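The McNemar test reported in the Results compares paired correct/incorrect answers from the two ChatGPT-4 versions on the same questions; it depends only on the discordant pairs (items one version got right and the other got wrong). A minimal, stdlib-only sketch of an exact McNemar test on invented discordant counts, not the study's data:

```python
# Hypothetical sketch: exact (binomial) McNemar test for paired binary outcomes.
# The discordant counts below are invented, not the study's actual results.
from math import comb

def mcnemar_exact(b, c):
    """Two-sided exact McNemar p-value from discordant pair counts b and c.

    b = items version A answered correctly and version B incorrectly;
    c = the reverse. Under H0 the discordant pairs split 50/50.
    """
    n = b + c
    k = min(b, c)
    # Two-sided binomial tail probability with p = 0.5, doubled and capped at 1.
    p = 2 * sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, p)

# e.g. Custom Robot right / Regular wrong on 12 items, the reverse on 8 items
p_value = mcnemar_exact(12, 8)
```

A large p-value here (as in the study's P=.45) means the discordant pairs are consistent with a 50/50 split, i.e., no detectable accuracy difference between the two versions.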
{"title":"Assessment of ChatGPT-4 in Family Medicine Board Examinations Using Advanced AI Learning and Analytical Methods: Observational Study.","authors":"Anthony James Goodings, Sten Kajitani, Allison Chhor, Ahmad Albakri, Mila Pastrak, Megha Kodancha, Rowan Ives, Yoo Bin Lee, Kari Kajitani","doi":"10.2196/56128","DOIUrl":"10.2196/56128","url":null,"abstract":"<p><strong>Background: </strong>This research explores the capabilities of ChatGPT-4 in passing the American Board of Family Medicine (ABFM) Certification Examination. Addressing a gap in existing literature, where earlier artificial intelligence (AI) models showed limitations in medical board examinations, this study evaluates the enhanced features and potential of ChatGPT-4, especially in document analysis and information synthesis.</p><p><strong>Objective: </strong>The primary goal is to assess whether ChatGPT-4, when provided with extensive preparation resources and when using sophisticated data analysis, can achieve a score equal to or above the passing threshold for the Family Medicine Board Examinations.</p><p><strong>Methods: </strong>In this study, ChatGPT-4 was embedded in a specialized subenvironment, \"AI Family Medicine Board Exam Taker,\" designed to closely mimic the conditions of the ABFM Certification Examination. This subenvironment enabled the AI to access and analyze a range of relevant study materials, including a primary medical textbook and supplementary web-based resources. The AI was presented with a series of ABFM-type examination questions, reflecting the breadth and complexity typical of the examination. Emphasis was placed on assessing the AI's ability to interpret and respond to these questions accurately, leveraging its advanced data processing and analysis capabilities within this controlled subenvironment.</p><p><strong>Results: </strong>In our study, ChatGPT-4's performance was quantitatively assessed on 300 practice ABFM examination questions. 
The AI achieved a correct response rate of 88.67% (95% CI 85.08%-92.25%) for the Custom Robot version and 87.33% (95% CI 83.57%-91.10%) for the Regular version. Statistical analysis, including the McNemar test (P=.45), indicated no significant difference in accuracy between the 2 versions. In addition, the chi-square test for error-type distribution (P=.32) revealed no significant variation in the pattern of errors across versions. These results highlight ChatGPT-4's capacity for high-level performance and consistency in responding to complex medical examination questions under controlled conditions.</p><p><strong>Conclusions: </strong>The study demonstrates that ChatGPT-4, particularly when equipped with specialized preparation and when operating in a tailored subenvironment, shows promising potential in handling the intricacies of medical board examinations. While its performance is comparable with the expected standards for passing the ABFM Certification Examination, further enhancements in AI technology and tailored training methods could push these capabilities to new heights. This exploration opens avenues for integrating AI tools such as ChatGPT-4 in medical education and assessment, emphasizing the importance of continuous advancement and specialized training in medical applications of AI.</p>","PeriodicalId":36236,"journal":{"name":"JMIR Medical Education","volume":null,"pages":null},"PeriodicalIF":3.2,"publicationDate":"2024-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11479358/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142393970","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
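The reported confidence intervals can be reproduced from the correct-answer counts implied by the percentages (266/300 and 262/300) with a normal-approximation (Wald) interval. This is a minimal sketch under the assumption that a Wald interval was used, which the abstract does not state; the McNemar result itself cannot be checked without the discordant-pair counts, which are not reported.

```python
import math

def wald_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% normal-approximation (Wald) CI for a proportion."""
    p = successes / n
    se = math.sqrt(p * (1 - p) / n)
    return p - z * se, p + z * se

# Custom Robot version: 266/300 correct -> reported 88.67% (95% CI 85.08%-92.25%)
lo, hi = wald_ci(266, 300)
print(f"{266 / 300:.2%} (95% CI {lo:.2%}-{hi:.2%})")  # 88.67% (95% CI 85.08%-92.25%)

# Regular version: 262/300 correct -> reported 87.33% (95% CI 83.57%-91.10%)
lo, hi = wald_ci(262, 300)
print(f"{262 / 300:.2%} (95% CI {lo:.2%}-{hi:.2%})")  # 87.33% (95% CI 83.57%-91.10%)
```

Both intervals match the abstract to two decimal places, which suggests the Wald interval is indeed what was used.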
Psychological Safety Competency Training During the Clinical Internship From the Perspective of Health Care Trainee Mentors in 11 Pan-European Countries: Mixed Methods Observational Study
Irene Carrillo, Ivana Skoumalová, Ireen Bruus, Victoria Klemm, Sofia Guerra-Paiva, Bojana Knežević, Augustina Jankauskiene, Dragana Jocic, Susanna Tella, Sandra C Buttigieg, Einav Srulovici, Andrea Madarasová Gecková, Kaja Põlluste, Reinhard Strametz, Paulo Sousa, Marina Odalovic, José Joaquín Mira
JMIR Medical Education, October 7, 2024. doi:10.2196/64125

Background: Psychological safety has been widely recognized in the research literature as a contributing factor to improving the quality of care and patient safety. However, it receives scant consideration in the curricula and traineeship pathways of residents and health care students.

Objective: This study aims to determine the extent to which health care trainees acquire psychological safety competencies during their internships in clinical settings and to identify measures that can be taken to promote their learning.

Methods: A mixed methods observational study, based on a consensus conference and an open-ended survey of health care trainee mentors from health care institutions in a pan-European context, was conducted. First, we administered an ad hoc questionnaire to assess the perceived degree of acquisition or implementation, and the significance, of competencies (knowledge, attitudes, and skills) and institutional interventions in psychological safety. Second, we asked mentors to propose measures to foster among trainees those competencies that, in the first phase of the study, obtained an average acquisition score below 3.4 (on a scale of 1-5). A content analysis of the information collected was carried out, and the spontaneity of each category and theme was determined.

Results: In total, 173 mentors from 11 pan-European countries completed the first questionnaire (response rate: 173/256, 67.6%), of whom 63 (36.4%) participated in the second consultation. The competencies with the lowest acquisition levels related to warning a professional that their behavior posed a risk to the patient, managing that professional's possible bad reaction, and offering support to a colleague who becomes a second victim. The mentors' proposals for closing this competency gap referred to training in communication skills and patient safety, safety culture, work climate, individual attitudes, a reference person for trainees, formal incorporation into the curricula of health care degrees and specialization pathways, specific systems and mechanisms to give trainees a voice, institutional risk management, regulations, guidelines and standards, supervision, and resources to support trainees. In terms of teaching methodology, the mentors recommended innovative strategies, many based on technological tools or solutions, including videos, seminars, lectures, workshops, simulation learning or role-playing with or without professional actors, case studies, videos with practical demonstrations or model situations, panel discussions, clinical sessions for joint analysis of patient safety incidents, and debriefings to set and discuss lessons learned.

Conclusions: This study sought to promote psychological safety competencies as a formal part of the training of future health care professionals, facilitating the translation of international guidelines into practice.
Performance of ChatGPT on Nursing Licensure Examinations in the United States and China: Cross-Sectional Study
Zelin Wu, Wenyi Gan, Zhaowen Xue, Zhengxin Ni, Xiaofei Zheng, Yiyi Zhang
JMIR Medical Education, October 3, 2024. doi:10.2196/52746

Background: The creation of large language models (LLMs) such as ChatGPT is an important step in the development of artificial intelligence, which shows great potential in medical education due to its powerful language understanding and generative capabilities. The purpose of this study was to quantitatively evaluate and comprehensively analyze ChatGPT's performance on questions from the nursing licensure examinations of the United States and China: the National Council Licensure Examination for Registered Nurses (NCLEX-RN) and the National Nursing Licensure Examination (NNLE), respectively.

Objective: This study aims to examine how well LLMs answer NCLEX-RN and NNLE multiple-choice questions (MCQs) presented in different languages, to evaluate whether LLMs can serve as multilingual learning assistants for nursing, and to assess whether they possess a repository of professional knowledge applicable to clinical nursing practice.

Methods: First, we compiled 150 NCLEX-RN Practical MCQs, 240 NNLE Theoretical MCQs, and 240 NNLE Practical MCQs. Then, the translation function of ChatGPT 3.5 was used to translate the NCLEX-RN questions from English to Chinese and the NNLE questions from Chinese to English. Finally, the original and translated versions of the MCQs were input into ChatGPT 4.0, ChatGPT 3.5, and Google Bard. The LLMs were compared on accuracy rate, and differences between the language inputs were assessed.

Results: The accuracy rates of ChatGPT 4.0 for the NCLEX-RN practical questions and the Chinese-translated NCLEX-RN practical questions were 88.7% (133/150) and 79.3% (119/150), respectively. Despite the statistical significance of the difference (P=.03), the correct rate was generally satisfactory. ChatGPT 4.0 correctly answered 71.9% (169/235) of the NNLE Theoretical MCQs and 69.1% (161/233) of the NNLE Practical MCQs; its accuracy on the English translations of these question sets was 71.5% (168/235; P=.92) and 67.8% (158/233; P=.77), respectively, with no statistically significant difference between input languages. ChatGPT 3.5 (NCLEX-RN P=.003, NNLE Theoretical P<.001, NNLE Practical P=.12) and Google Bard (NCLEX-RN P<.001, NNLE Theoretical P<.001, NNLE Practical P<.001) had lower accuracy rates than ChatGPT 4.0 on nursing-related MCQs with English input. For ChatGPT 3.5, accuracy with English input was higher than with Chinese input, and the difference was statistically significant (NCLEX-RN P=.02, NNLE Practical P=.02). Whether the MCQs were submitted in Chinese or English, ChatGPT 4.0 had the highest number of unique correct responses and the lowest number of unique incorrect responses on the NCLEX-RN and NNLE among the 3 LLMs.

Conclusions: This study, focusing on 618 nursing MCQs including NCLEX-RN and
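The abstract above does not name the test behind the NCLEX-RN language comparison (P=.03). As a hedged illustration, a standard two-proportion z-test on the reported counts (133/150 vs 119/150) reproduces a P value of about .03, consistent with the abstract; the authors may have used a different procedure such as a chi-square or paired test.

```python
import math

def two_proportion_ztest(x1: int, n1: int, x2: int, n2: int) -> tuple[float, float]:
    """Two-sided two-proportion z-test with pooled standard error.
    Illustrative only: the abstract does not state which test was used."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # = 2 * (1 - Phi(|z|))
    return z, p_value

# ChatGPT 4.0, NCLEX-RN practical: English 133/150 vs Chinese-translated 119/150
z, p = two_proportion_ztest(133, 150, 119, 150)
print(f"z = {z:.2f}, P = {p:.3f}")
```

The computed P lands near .03, matching the reported significance of the English-versus-Chinese gap.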
Bridging the Telehealth Digital Divide With Collegiate Navigators: Mixed Methods Evaluation Study of a Service-Learning Health Disparities Course
Zakaria Nadeem Doueiri, Rika Bajra, Malathi Srinivasan, Erika Schillinger, Nancy Cuan
JMIR Medical Education, October 1, 2024. doi:10.2196/57077

Background: Limited digital literacy is a barrier for vulnerable patients accessing health care.

Objective: The Stanford Technology Access Resource Team (START), a service-learning course created to bridge the telehealth digital divide, trained undergraduate and graduate students to provide hands-on patient support to improve access to electronic medical records (EMRs) and video visits while learning about social determinants of health.

Methods: START students reached out to 1185 patients (n=711, 60% from primary care clinics of a large academic medical center and n=474, 40% from a federally qualified health center). Registries consisted of patients without an EMR account (at the primary care clinics) or patients with a scheduled telehealth visit (at the federally qualified health center). Patient outcomes were evaluated by successful EMR enrollments and video visit setups. Student outcomes were assessed through reflections coded for thematic content.

Results: Over 6 academic quarters, 57 students reached out to 1185 registry patients. Of the 229 patients contacted, 141 desired technical support. START students successfully established EMR accounts and set up video visits for 78.7% (111/141) of these patients. After program completion, we reached out to 13.5% (19/141) of patients to collect perspectives on program utility. The majority (18/19, 94.7%) reported that START students were helpful, and 73.7% (14/19) reported that they had successfully connected with their health care provider in a digital visit. Reasons for being unable to establish access included a lack of Wi-Fi or device access, the absence of an interpreter, and a disability that precluded the use of video visits. Qualitative analysis of student reflections showed an impact on future career goals and improved awareness of the health disparities associated with technology access.

Conclusions: Of the patients who desired telehealth access, START improved access for 78.7% (111/141). Students found that START broadened their understanding of health disparities and social determinants of health and influenced their future career goals.
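The outreach figures above form a simple funnel. This is a minimal sketch reconstructing the reported proportions; the counts come from the abstract, but the stage labels are paraphrased.

```python
# START outreach funnel; counts from the abstract, labels paraphrased
funnel = [
    ("registry patients", 1185),
    ("patients contacted", 229),
    ("desired technical support", 141),
    ("EMR account or video visit established", 111),
]
# each stage's conversion rate relative to the previous stage
for (stage, n), (_, prev) in zip(funnel[1:], funnel):
    print(f"{stage}: {n}/{prev} = {n / prev:.1%}")
```

The final stage reproduces the abstract's headline figure, 111/141 = 78.7%.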
Knowledge Mapping and Global Trends in the Field of the Objective Structured Clinical Examination: Bibliometric and Visual Analysis (2004-2023)
Hongjun Ba, Lili Zhang, Xiufang He, Shujuan Li
JMIR Medical Education, September 30, 2024. doi:10.2196/57772

Background: The Objective Structured Clinical Examination (OSCE) is a pivotal tool for assessing health care professionals and plays an integral role in medical education.

Objective: This study aims to map the bibliometric landscape of OSCE research, highlighting trends and key influencers.

Methods: A comprehensive literature search was conducted for materials related to the OSCE published from January 2004 to December 2023, using the Web of Science Core Collection database. Bibliometric analysis and visualization were performed with the VOSviewer and CiteSpace software tools.

Results: Our analysis indicates a consistent increase in OSCE-related publications over the study period, with a notable surge after 2019 culminating in a peak of activity in 2021. The United States emerged as a significant contributor, responsible for 30.86% (1626/5268) of total publications and amassing 44,051 citations. Coauthorship network analysis highlighted robust collaborations, particularly between the United States and the United Kingdom. The leading journals in this domain (BMC Medical Education, Medical Education, Academic Medicine, and Medical Teacher) featured the highest volume of papers, while The Lancet garnered substantial citations, consistent with its high impact factor. Prominent authors in the field include Sondra Zabar, Debra Pugh, Timothy J Wood, and Susan Humphrey-Murto, with Ronald M Harden, Brian D Hodges, and George E Miller being the most cited. The analysis of key research terms revealed a focus on "education," "performance," "competence," and "skills," indicating that these are central themes in OSCE research.

Conclusions: The study underscores a dynamic expansion in OSCE research and international collaboration, spotlighting influential countries, institutions, authors, and journals. These elements are instrumental in steering the evolution of medical education assessment practices and suggest a trajectory for future research. Future work should consider the implications of these findings for medical education and potential areas for further investigation, particularly in underrepresented regions or emerging competencies in health care training.
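The coauthorship (and country-collaboration) networks described above are built from pairwise co-occurrence counts: two entities are linked when they appear on the same paper, and the link weight is the number of shared papers. This is a minimal stdlib sketch of that counting step on invented toy data; VOSviewer derives the same kind of link weights from the full Web of Science export.

```python
from collections import Counter
from itertools import combinations

# Toy corpus: author-affiliation countries per paper (invented for illustration)
papers = [
    ["United States", "United Kingdom"],
    ["United States", "United Kingdom", "Canada"],
    ["United States"],
    ["Canada", "United Kingdom"],
]

link_weight: Counter = Counter()
for countries in papers:
    # one undirected co-occurrence link per country pair per paper
    for pair in combinations(sorted(set(countries)), 2):
        link_weight[pair] += 1

# The US-UK link accumulates weight from every paper both countries share
print(link_weight[("United Kingdom", "United States")])  # -> 2
```

In a real bibliometric pipeline, the same loop runs over author lists (for coauthorship) or keyword lists (for term co-occurrence), and the resulting weighted edges feed the network layout.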