Pub Date: 2024-12-01 | Epub Date: 2022-12-21 | DOI: 10.1097/CEH.0000000000000471
Farhan Saeed Vakani, Kerry Uebel, Chinthaka Balasooriya, Apo Demirkol
Introduction: Continuing medical education (CME) is a process of continuous learning that maintains physicians' competence and professional performance. Efforts to make CME programs mandatory in the South-East Asia Region by linking credits to the renewal of registration have met with mixed success, and there are no recent reviews of CME status in regions with a large number of developing countries. This review aims to map the practices and regulation of CME activities in the South-East Asia and Eastern Mediterranean regions.
Methods: A scoping review was undertaken using a modified Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews checklist. A search was conducted within PubMed, Embase, Web of Science, Scopus databases, and national medical and health council websites.
Results: Evidence on the provision of CME is available for all but seven of the 33 countries in the two regions. Fourteen countries of varying income levels have implemented mandatory CME linked to the renewal of registration. These countries have statutory bodies that govern CME and allocate credits; most require a large number of hourly-based activities for the renewal of registration and show evidence of a wide range of local providers.
Discussion: Financial resources, a thorough organizational structure and standards, and a wide range of local CME providers seem to promote the implementation of mandatory CME in most of these countries.
The Status Quo of Continuing Medical Education in South-East Asia and Eastern Mediterranean Regions: A Scoping Review of 33 Countries. Journal of Continuing Education in the Health Professions, pp. 44-52.
Pub Date: 2024-12-01 | Epub Date: 2023-06-08 | DOI: 10.1097/CEH.0000000000000499
Marlene Taube-Schiff, Persephone Larkin, Eugenia Fibiger, Elizabeth Lin, David Wiljer, Sanjeev Sockalingam
Introduction: Quality improvement (QI) programming attempts to bridge the gap between patient care and standards of care. Mentorship could be a means through which QI is fostered, developed, and incorporated into continuing professional development (CPD) programs. The current study examined (1) models of implementation for mentorship within the Department of Psychiatry of a large Canadian academic center; (2) mentorship as a potential vehicle for alignment of QI practices and CPD; and (3) needs for the implementation of QI and CPD mentorship programs.
Methods: Qualitative interviews were conducted with 14 individuals associated with the university's Department of Psychiatry. The data were analyzed through thematic analysis by two independent coders, following COREQ guidelines.
Results: Our results identified uncertainty among participants regarding the conceptualization of QI and CPD, illustrating the difficulty of determining whether mentorship could be used to align these practices. Three major themes were identified in our analyses: sharing of QI work through communities of practice; the need for organizational support; and relational experiences of QI mentoring.
Discussion: A greater understanding of QI is necessary before psychiatry departments can implement mentorship to enhance QI practices. However, models of mentorship and needs for mentorship have been made clear and include a good mentorship fit, organizational support, and opportunities for both formal and informal mentorship. Changing organizational culture and providing appropriate training are necessary for enhancing QI.
Understanding Quality Improvement and Continuing Professional Mentorship: A Needs Assessment Study to Inform the Development of a Community of Practice. Journal of Continuing Education in the Health Professions, pp. 11-17.
Pub Date: 2024-12-01 | Epub Date: 2023-05-04 | DOI: 10.1097/CEH.0000000000000503
Patricia A Parker, Jessica Staley, William E Rosa, Richard Weiner, Smita C Banerjee
Introduction: Effective communication among members of health care teams is essential to provide quality and patient-centered care, yet many people identify this as a challenge. We developed, implemented, and conducted a preliminary evaluation of a training to enhance communication within oncology teams.
Methods: This training identifies key strategies, communication skills, and process tasks recommended to achieve the goal of using a collaborative approach to navigate communication interactions across members of the hospital team to enhance patient care outcomes and increase team effectiveness. Forty-six advanced practice providers (APPs) participated and completed an evaluation of the module.
Results: Eighty-three percent of participants identified as female and 61% were White; 83% were nurse practitioners and 17% were physician assistants. The module was highly rated: on 16 of 17 evaluation items, 80% or more of participants responded "agree" or "strongly agree" that they were satisfied.
Discussion: APPs were satisfied with the course and found many aspects useful for learning and practicing skills to improve their communication with other team members and enhance their care of patients. Training with this module and other communication approaches is needed for health care professionals of all types, to encourage more consistent and meaningful communication with colleagues and improve patient care.
Development of a Communication Skills Training to Enhance Effective Team Communication in Oncology. Journal of Continuing Education in the Health Professions, pp. 71-74. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10624640/pdf/
Pub Date: 2024-12-01 | Epub Date: 2023-04-17 | DOI: 10.1097/CEH.0000000000000500
Tharshini Jeyakumar, Inaara Karsan, Betsy Williams, Joyce Fried, Gabrielle Kane, Sharon Ambata-Villanueva, Ashleigh Bennett, Graham T McMahon, Morag Paton, Nathaniel Williams, Sarah Younus, David Wiljer
Abstract: Continuing professional development (CPD) fosters lifelong learning and enables health care providers to keep their knowledge and skills current with rapidly evolving health care practices. Instructional methods that promote critical thinking and decision making contribute to effective CPD interventions, and delivery methods influence the uptake of content and the resulting changes in knowledge, skills, attitudes, and behavior. New educational approaches are needed to ensure that CPD meets the changing needs of health care providers. This article examines the development approach and key recommendations embedded in a CE Educator's toolkit created to evolve CPD practice and foster a learning experience that promotes self-awareness, self-reflection, competency, and behavioral change. The Knowledge-to-Action framework was used in designing the toolkit. The toolkit highlights three intervention formats: facilitation of small-group learning, case-based learning, and reflective learning. Strategies and guidelines to promote active learning principles in CPD activities across different modalities and learning contexts are included. The goal of the toolkit is to assist CPD providers in designing educational activities that optimally support health care providers' self-reflection and translation of knowledge into their clinical environment, contribute to practice improvement, and thus achieve the outcomes of the quintuple aim.
Paving the Way Forward for Evidence-Based Continuing Professional Development. Journal of Continuing Education in the Health Professions, pp. 53-57.
Introduction: The Baylor International Pediatric AIDS Initiative (BIPAI) Network supports a network of independent nongovernmental organizations providing health care for children and families in low- and middle-income countries (LMIC). Using a community of practice (CoP) framework, a continuing professional development (CPD) program was created for health professionals to enhance knowledge and exchange best practices.
Methods: An online learning platform (Moodle), videoconferencing (Zoom), an instant messaging system (WhatsApp), and an email listserv facilitated learning and interaction among program participants. Target participants initially included pharmacy staff and expanded to include other health professionals. Learning modules included asynchronous assignments and review of materials, live discussion sessions, and module pretests and posttests. Evaluation included participants' activities, changes in knowledge, and assignment completion. Participants provided feedback on program quality via surveys and interviews.
Results: Five of 11 participants in Year 1 earned a certificate of completion, and 17 of 45 participants earned a certificate in Year 2. Most modules showed an increase in module pretest and posttest scores. Ninety-seven percent of participants indicated that the relevance and usefulness of modules were good or outstanding. Ongoing evaluation indicated changes in Year 2 for program improvement, and notable outcomes indicated how CoP added value in developing a true community.
Discussion: Using a CoP framework allowed participants to improve their personal knowledge and become part of a learning community and network of interdisciplinary health care professionals. Lessons learned included expanding program evaluation to capture potential value creation of the community of practice in addition to individual-level development; providing briefer, more focused programs to better serve busy working professionals; and optimizing use of technological platforms to improve participant engagement.
Evolution of a Continuing Professional Development Program Based on a Community of Practice Model for Health Care Professionals in Resource-Limited Settings. Diane Nguyen, Kris Denzel Tupas, Satid Thammasitboon. Pub Date: 2024-12-01 | DOI: 10.1097/CEH.0000000000000505. Journal of Continuing Education in the Health Professions, pp. 58-63.
Pub Date: 2024-12-01 | Epub Date: 2022-12-21 | DOI: 10.1097/CEH.0000000000000483
Vernon Curran, Lisa Fleet, Cynthia Whitton
Introduction: Reflective practice involves thinking about one's practice and often draws on data to inform that reflection. Multisource feedback (MSF) involves evaluation by peers, patients, and coworkers. Coaching has been identified as a key aspect of MSF, with peer coaching involving two or more colleagues working together to reflect on current practices and share ideas. We introduced a pilot MSF and peer coaching program with the goal of evaluating its effect on fostering reflective practice.
Methods: Physician participants completed a 360-degree assessment of their practices, followed by peer coaching sessions. Peer coaches were oriented to an evidence-based theory-driven feedback model (R2C2) to support coaching skills development. A mixed-methods evaluation study was undertaken, including pre to post surveys of readiness for self-directed learning, a postevaluation survey of participant satisfaction, and semistructured participant interviews.
Results: Thirty-four (N = 34) participants completed the 360-degree assessment, and 22 took part in two coaching meetings. Respondents reported significant improvement in aspects of their readiness for self-directed learning (P < .05), including knowing about learning strategies to achieve key learning goals, knowing about resources to support one's own learning, and being able to evaluate one's learning outcomes. Overall, respondents felt empowered to "reflect" on their practices, affirm what they were doing well, and, for some, identify opportunities for further and ongoing professional development.
Discussion: MSF and peer coaching emerged as key elements in enabling reflective practice by facilitating reflection on one's practice and conversations with one's peers to affirm strengths and opportunities for strengthening practice through self-directed professional development.
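The abstract reports significant pre-to-post improvement (P < .05) without naming the statistical test; a paired-samples t statistic is one conventional choice for matched pre/post survey scores. The sketch below, using hypothetical ratings, illustrates the computation only; it is not the study's actual analysis.

```python
import math
from statistics import mean, stdev

def paired_t(pre, post):
    """Paired-samples t statistic for matched pre/post scores.
    Returns (t, degrees of freedom); compare t against a t distribution
    to obtain the two-sided P value."""
    diffs = [b - a for a, b in zip(pre, post)]   # per-participant change
    n = len(diffs)
    se = stdev(diffs) / math.sqrt(n)             # standard error of mean diff
    return mean(diffs) / se, n - 1

# Hypothetical 5-point readiness ratings for four participants
pre = [3, 3, 4, 2]
post = [4, 4, 5, 4]
t, df = paired_t(pre, post)   # t = 5.0, df = 3
```

With t = 5.0 on 3 degrees of freedom, the two-sided P value falls below .05, matching the direction of the reported result.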
Fostering "Reflection-On-Practice" Through a Multisource Feedback and Peer Coaching Pilot Program. Journal of Continuing Education in the Health Professions, pp. 64-70.
Pub Date: 2024-12-01 | Epub Date: 2022-12-21 | DOI: 10.1097/CEH.0000000000000481
Stacey Bregman, Elana Thau, Martin Pusic, Manuela Perez, Kathy Boutis
Introduction: There is limited knowledge of pediatric chest radiograph (pCXR) interpretation skill among practicing physicians. We systematically determined baseline interpretation skill, the number of pCXR cases physicians were required to complete to achieve a performance benchmark, and which diagnoses posed the greatest diagnostic challenge.
Methods: Physicians interpreted 434 pCXR cases via a web-based platform until they achieved a performance benchmark of 85% accuracy, sensitivity, and specificity. Interpretation difficulty scores for each case were derived by applying one-parameter item response theory to participant data. We compared interpretation difficulty scores across diagnostic categories and described the diagnoses of the 30% most difficult-to-interpret cases.
Results: Two hundred forty physicians practicing in one of three geographic areas interpreted cases, yielding 56,833 pCXR case interpretations. Initial diagnostic performance (first 50 cases) showed an accuracy of 68.9%, a sensitivity of 69.4%, and a specificity of 68.4%. The median number of cases completed to achieve the performance benchmark was 102 (interquartile range 69-176; minimum 54, maximum 431). Among the 30% most difficult-to-interpret cases, 39.2% were normal pCXRs and 32.3% were cases of lobar pneumonia. Cases with a single trauma-related imaging finding and cases with cardiac, hilar, and diaphragmatic pathologies were also among the most challenging.
Discussion: At baseline, practicing physicians misdiagnosed about one-third of pCXRs, and there was up to an eight-fold difference between participants in the number of cases completed to achieve the standardized performance benchmark. We also identified the diagnoses with the greatest potential for educational intervention.
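The methods apply one-parameter item response theory to derive per-case difficulty scores. A full Rasch fit jointly estimates person ability and item difficulty; as a rough illustration only (not the authors' procedure), a crude difficulty index can be sketched as the negative log-odds of the proportion of correct interpretations per case:

```python
import numpy as np

def case_difficulty_logits(responses):
    """Crude stand-in for a one-parameter (Rasch) item difficulty:
    negative log-odds of the proportion of correct interpretations per
    case. Higher values indicate harder cases. A true Rasch model would
    estimate person ability jointly; this keeps only the item side.

    responses: 2-D array (participants x cases); 1 = correct,
    0 = incorrect, np.nan = case not attempted."""
    p_correct = np.nanmean(responses, axis=0)    # proportion correct per case
    p_correct = np.clip(p_correct, 0.01, 0.99)   # keep logits finite
    return -np.log(p_correct / (1 - p_correct))

# Toy data (hypothetical, not study data): 4 physicians x 3 cases
r = np.array([[1, 0, 1],
              [1, 0, 1],
              [1, 1, 0],
              [1, 0, 0]], dtype=float)
d = case_difficulty_logits(r)
# Case 1 (everyone correct) scores easiest; case 2 (1/4 correct) hardest.
```

Ranking cases by such difficulty scores is what lets the study isolate its hardest 30% of cases and characterize their diagnoses.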
A Performance-Based Competency Assessment of Pediatric Chest Radiograph Interpretation Among Practicing Physicians. Journal of Continuing Education in the Health Professions, pp. 28-34.
Pub Date: 2024-12-01 | Epub Date: 2023-03-07 | DOI: 10.1097/CEH.0000000000000487
David W Price, Ting Wang, Thomas R O'Neill, Andrew Bazemore, Warren P Newton
Introduction: Evidence links assessment to optimal learning, affirming that physicians are more likely to study, learn, and practice skills when some form of consequence ("stakes") may result from an assessment. We lack evidence, however, on how physicians' confidence in their knowledge relates to performance on assessments, and whether this varies based on the stakes of the assessment.
Methods: Our retrospective repeated-measures design compared differences in patterns of physician answer accuracy and answer confidence among physicians participating in both a high-stakes and a low-stakes longitudinal assessment of the American Board of Family Medicine.
Results: After 1 and 2 years, participants were more often correct but less confident in their accuracy on a higher-stakes longitudinal knowledge assessment compared with a lower-stakes assessment. There were no differences in question difficulty between the two platforms. Variation existed between platforms in time spent answering questions, use of resources to answer questions, and perceived question relevance to practice.
Discussion: This novel study of physician certification suggests that the accuracy of physician performance increases with higher stakes, even as self-reported confidence in their knowledge declines. It suggests that physicians may be more engaged in higher-stakes compared with lower-stakes assessments. With medical knowledge growing exponentially, these analyses provide an example of the complementary roles of higher- and lower-stakes knowledge assessment in supporting physician learning during continuing specialty board certification.
{"title":"Differences in Physician Performance and Self-rated Confidence on High- and Low-Stakes Knowledge Assessments in Board Certification.","authors":"David W Price, Ting Wang, Thomas R O'Neill, Andrew Bazemore, Warren P Newton","doi":"10.1097/CEH.0000000000000487","DOIUrl":"10.1097/CEH.0000000000000487","url":null,"abstract":"<p><strong>Introduction: </strong>Evidence links assessment to optimal learning, affirming that physicians are more likely to study, learn, and practice skills when some form of consequence (\"stakes\") may result from an assessment. We lack evidence, however, on how physicians' confidence in their knowledge relates to performance on assessments, and whether this varies based on the stakes of the assessment.</p><p><strong>Methods: </strong>Our retrospective repeated-measures design compared differences in patterns of physician answer accuracy and answer confidence among physicians participating in both a high-stakes and a low-stakes longitudinal assessment of the American Board of Family Medicine.</p><p><strong>Results: </strong>After 1 and 2 years, participants were more often correct but less confident in their accuracy on a higher-stakes longitudinal knowledge assessment compared with a lower-stakes assessment. There were no differences in question difficulty between the two platforms. Variation existed between platforms in time spent answering questions, use of resources to answer questions, and perceived question relevance to practice.</p><p><strong>Discussion: </strong>This novel study of physician certification suggests that the accuracy of physician performance increases with higher stakes, even as self-reported confidence in their knowledge declines. It suggests that physicians may be more engaged in higher-stakes compared with lower-stakes assessments. 
With medical knowledge growing exponentially, these analyses provide an example of the complementary roles of higher- and lower-stakes knowledge assessment in supporting physician learning during continuing specialty board certification.</p>","PeriodicalId":50218,"journal":{"name":"Journal of Continuing Education in the Health Professions","volume":" ","pages":"2-10"},"PeriodicalIF":1.8,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9101286","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-12-01Epub Date: 2023-04-12DOI: 10.1097/CEH.0000000000000508
Guilherme S Nunes, Brenda D Guterres, Anna Carolina O Machado, Anna Julia M Dangui, Rafaela A Schreiner, Inaihá Laureano Benincá, Alessandro Haupenthal
Introduction: Difficulty comprehending scientific information has been reported as a barrier to evidence-based practice (EBP) adoption. This survey study aimed to identify physiotherapists' preferred sources of information for acquiring knowledge about physiotherapy and the association between types of information source and barriers to EBP implementation.
Methods: A total of 610 physiotherapists answered an online questionnaire about their preferred sources for physiotherapy-related information and possible barriers to EBP implementation.
Results: Physiotherapists reported scientific resources as their preferred sources of information: scientific databases (31%), followed by scientific articles (25%). The main barrier cited in EBP implementation was difficulty in obtaining full-text articles (34%), followed by lack of statistical knowledge (30%). Preferring peer-reviewed resources as the main source of information was associated with reported difficulty in comprehending scientific information.
Discussion: Despite the positive attitude toward the use of scientific information, the findings raise questions regarding the proper translation of scientific information into clinical practice. The importance of scientific information seems to be a well-established attitude among physiotherapists. However, there is a clear need for strategies to improve the understanding of scientific information and thereby facilitate EBP implementation.
{"title":"Where do Physiotherapists Search for Information? Barriers in Translating Scientific Information into Clinical Practice.","authors":"Guilherme S Nunes, Brenda D Guterres, Anna Carolina O Machado, Anna Julia M Dangui, Rafaela A Schreiner, Inaihá Laureano Benincá, Alessandro Haupenthal","doi":"10.1097/CEH.0000000000000508","DOIUrl":"10.1097/CEH.0000000000000508","url":null,"abstract":"<p><strong>Introduction: </strong>Difficulty comprehending scientific information has been reported as a barrier to evidence-based practice (EBP) adoption. This survey study aimed to identify physiotherapists' preferred sources of information for acquiring knowledge about physiotherapy and the association between types of information source and barriers to EBP implementation.</p><p><strong>Methods: </strong>A total of 610 physiotherapists answered an online questionnaire about their preferred sources for physiotherapy-related information and possible barriers to EBP implementation.</p><p><strong>Results: </strong>Physiotherapists reported scientific resources as their preferred sources of information: scientific databases (31%), followed by scientific articles (25%). The main barrier cited in EBP implementation was difficulty in obtaining full-text articles (34%), followed by lack of statistical knowledge (30%). Preferring peer-reviewed resources as the main source of information was associated with reported difficulty in comprehending scientific information.</p><p><strong>Discussion: </strong>Despite the positive attitude toward the use of scientific information, the findings raise questions regarding the proper translation of scientific information into clinical practice. The importance of scientific information seems to be a well-established attitude among physiotherapists. 
However, there is a clear need for strategies aiming to improve the understanding of scientific information and consequently facilitate EBP implementation.</p>","PeriodicalId":50218,"journal":{"name":"Journal of Continuing Education in the Health Professions","volume":" ","pages":"75-78"},"PeriodicalIF":1.8,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9662921","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Introduction: Contextual factors can influence healthcare professionals' (HCPs) competencies, yet there is a scarcity of research on how to optimally measure these factors. The aim of this study was to develop and validate a comprehensive tool for HCPs to document the contextual factors likely to influence the maintenance, development, and deployment of professional competencies.
Methods: We used DeVellis' 8-step process for scale development and Messick's unified theory of validity to inform the development and validation of the context tool. Building on results from a scoping review, we generated an item pool of contextual factors articulated around five themes: Leadership and Agency, Values, Policies, Supports, and Demands. A first version of the tool was pilot tested with 127 HCPs and analyzed using the classical test theory. A second version was tested on a larger sample (n = 581) and analyzed using the Rasch rating scale model.
Results: First version of the tool: we piloted 117 items that were grouped as per the themes related to contextual factors and rated on a 5-point Likert scale. Cronbach alpha for the set of 12 retained items per scale ranged from 0.75 to 0.94. Second version of the tool included 60 items: Rasch analysis showed that four of the five scales (ie, Leadership and Agency, Values, Policies, Supports) can be used as unidimensional scales, whereas the fifth scale (Demands) had to be split into two unidimensional scales (Demands and Overdemands).
Discussion: Validity evidence documented for content and internal structure is encouraging and supports the use of the McGill context tool. Future research will provide additional validity evidence and cross-cultural translation.
{"title":"Measuring Health Care Work-Related Contextual Factors: Development of the McGill Context Tool.","authors":"Aliki Thomas, Christina St-Onge, Jean-Sébastien Renaud, Catherine George, Muhammad Zafar Iqbal, Martine Brousseau, Joseph-Omer Dyer, Frances Gallagher, Miriam Lacasse, Isabelle Ledoux, Brigitte Vachon, Annie Rochette","doi":"10.1097/CEH.0000000000000514","DOIUrl":"10.1097/CEH.0000000000000514","url":null,"abstract":"<p><strong>Introduction: </strong>Contextual factors can influence healthcare professionals' (HCPs) competencies, yet there is a scarcity of research on how to optimally measure these factors. The aim of this study was to develop and validate a comprehensive tool for HCPs to document the contextual factors likely to influence the maintenance, development, and deployment of professional competencies.</p><p><strong>Methods: </strong>We used DeVellis' 8-step process for scale development and Messick's unified theory of validity to inform the development and validation of the context tool. Building on results from a scoping review, we generated an item pool of contextual factors articulated around five themes: Leadership and Agency, Values, Policies, Supports, and Demands. A first version of the tool was pilot tested with 127 HCPs and analyzed using the classical test theory. A second version was tested on a larger sample (n = 581) and analyzed using the Rasch rating scale model.</p><p><strong>Results: </strong>First version of the tool: we piloted 117 items that were grouped as per the themes related to contextual factors and rated on a 5-point Likert scale. Cronbach alpha for the set of 12 retained items per scale ranged from 0.75 to 0.94. 
Second version of the tool included 60 items: Rasch analysis showed that four of the five scales (ie, Leadership and Agency, Values, Policies, Supports) can be used as unidimensional scales, whereas the fifth scale (Demands) had to be split into two unidimensional scales (Demands and Overdemands).</p><p><strong>Discussion: </strong>Validity evidence documented for content and internal structure is encouraging and supports the use of the McGill context tool. Future research will provide additional validity evidence and cross-cultural translation.</p>","PeriodicalId":50218,"journal":{"name":"Journal of Continuing Education in the Health Professions","volume":" ","pages":"18-27"},"PeriodicalIF":1.8,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10043428","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
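Cronbach's alpha, reported above for the 12 retained items per scale (0.75 to 0.94), measures a scale's internal consistency from the ratio of summed item variances to total-score variance. The sketch below computes it directly; the rating data are made up for illustration and are not the study's items.

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a multi-item scale.

    items: one list per item, each holding the scores all
    respondents gave that item (columns of the response matrix).
    """
    k = len(items)                    # number of items in the scale
    n = len(items[0])                 # number of respondents

    def variance(xs):                 # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    sum_item_var = sum(variance(col) for col in items)
    totals = [sum(col[r] for col in items) for r in range(n)]
    return k / (k - 1) * (1 - sum_item_var / variance(totals))

# Hypothetical 5-point Likert ratings: three items, five respondents.
ratings = [
    [1, 2, 3, 4, 5],
    [2, 2, 3, 4, 4],
    [1, 3, 3, 3, 5],
]
alpha = cronbach_alpha(ratings)   # items agree closely -> alpha near 0.93
```

Values approaching 1 indicate that the items move together across respondents; the 0.75 to 0.94 range reported for the McGill context tool's scales sits in the conventionally acceptable-to-excellent band.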