Title: A Systematic Review on the Effectiveness of Systems-Based Practice Curricula in Health Professions Education
Pub Date: 2023-09-01 | DOI: 10.1177/01632787231188182
Authors: Raymond Boon Tar Lim, Kenneth Wee Beng Hoe, Claire Gek Ling Tan, Huili Zheng
Journal: Evaluation & the Health Professions

This systematic review evaluates the effectiveness of systems-based practice (SBP) curricula from the perspective of health professions students and workers. A total of 8468 citations were retrieved from six electronic databases and manual searches conducted independently by two researchers, of which 44 studies were eventually included. A meta-analysis using a random-effects model and a meta-synthesis using the thematic synthesis approach were conducted. Most studies targeted medical students, residents, and resident physicians from various clinical specialties. Almost half of the studies focused on didactic or knowledge-based interventions to teach SBP, and about a third measured non-self-evaluated knowledge change, clinical abilities, and clinical outcomes. Both the meta-analysis and the meta-synthesis revealed positive outcomes: increased knowledge of SBP, increased recognition of SBP as a core competency in one's profession, and increased application of SBP knowledge in one's profession. The meta-synthesis also revealed negative outcomes at the institutional and teacher/health professions levels. This review highlights the importance of SBP education and supports the effectiveness of SBP curricula, while underscoring the need to address the negative outcomes at the institutional and teacher/health professions levels. Future studies could also investigate integrating self-assessment outcomes with comparison to an external standard.
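The review's pooling method, a random-effects meta-analysis, can be sketched with the classic DerSimonian-Laird estimator: weight each study by inverse variance, estimate the between-study variance τ² from Cochran's Q, then re-pool. This is a minimal illustrative sketch; the effect sizes and variances below are hypothetical, not taken from the review.

```python
# DerSimonian-Laird random-effects pooling (illustrative; data are hypothetical).
def random_effects_pool(effects, variances):
    k = len(effects)
    w = [1 / v for v in variances]                      # inverse-variance weights
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                  # between-study variance
    w_star = [1 / (v + tau2) for v in variances]        # re-weight with tau^2 added
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se = (1 / sum(w_star)) ** 0.5
    return pooled, se, tau2

pooled, se, tau2 = random_effects_pool([0.4, 0.6, 0.3, 0.8],
                                       [0.04, 0.09, 0.05, 0.12])
print(round(pooled, 3), round(se, 3), round(tau2, 3))
```

When the heterogeneity statistic Q falls below its degrees of freedom, τ² truncates to zero and the estimate coincides with the fixed-effect pooled mean, as it does for this toy data.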
Title: Natural Language Processing of Learners' Evaluations of Attendings to Identify Professionalism Lapses
Pub Date: 2023-09-01 | DOI: 10.1177/01632787231158128
Authors: Janae K Heath, Caitlin B Clancy, William Pluta, Gary E Weissman, Ursula Anderson, Jennifer R Kogan, C Jessica Dine, Judy A Shea

Unprofessional faculty behaviors negatively impact the well-being of trainees yet are infrequently reported through established reporting systems. Manual review of narrative faculty evaluations provides an additional avenue for identifying unprofessional behavior but is time- and resource-intensive, and therefore of limited value for identifying and remediating faculty with professionalism concerns. Natural language processing (NLP) techniques may provide a mechanism for streamlining these manual review processes. In this retrospective cohort study of 15,432 narrative evaluations of medical faculty by medical trainees, we identified professionalism lapses using automated analysis of the evaluation text. We used multiple NLP approaches to develop and validate several classification models, which were evaluated primarily on positive predictive value (PPV) and secondarily on calibration. An NLP model combining sentiment analysis (quantifying the subjectivity of the text) with key words (using an ensemble technique) had the best performance overall, with a PPV of 49% (CI 38%-59%). These findings highlight how NLP can be used to screen narrative evaluations of faculty for unprofessional behaviors. Incorporating NLP into faculty review workflows enables a more focused manual review of comments, providing a supplemental mechanism to identify faculty professionalism lapses.
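The headline metric here, PPV with an interval, is simple to reproduce: PPV = TP / (TP + FP) over the comments the classifier flags, with a Wilson score interval for the uncertainty. The counts below are hypothetical (chosen only so the point estimate lands near the reported 49%), and the study does not state which interval method it used.

```python
import math

# Positive predictive value with a Wilson 95% interval (counts are hypothetical).
def ppv_wilson(tp, fp, z=1.96):
    n = tp + fp                          # all comments the classifier flagged
    p = tp / n                           # PPV = TP / (TP + FP)
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return p, centre - half, centre + half

p, lo, hi = ppv_wilson(tp=44, fp=46)     # 44 of 90 flagged comments were true lapses
print(round(p, 2), round(lo, 2), round(hi, 2))
```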
Title: Psychometric Properties and Measurement Invariance of the LIVES Daily Hassles Scale in Chinese Samples
Pub Date: 2023-09-01 | DOI: 10.1177/01632787231164782
Authors: Kang Liu, Xiaofang Yu, Yan Cai, Dongbo Tu

Daily hassles have a larger effect on our health and well-being than major life events. The present study evaluated the psychometric properties and measurement invariance of the LIVES Daily Hassles Scale (LIVES-DHS) in a Chinese sample of 815 working adults aged 20 to 60 years. Both exploratory and confirmatory factor analyses showed that the five-factor solution outperformed the alternatives, supporting the original structure of the LIVES-DHS. Cronbach's alpha coefficients ranged from .721 to .818 for the five subdimensions and were .920 for the entire scale; McDonald's ω values ranged from .716 to .821 for the subdimensions and were .936 for the entire scale. The results also supported measurement invariance of the five-factor model across groups; this is the first study to offer evidence for configural, metric, scalar, and strict invariance of the LIVES-DHS across gender, age, and educational groups.
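The internal-consistency coefficient reported above has a short closed form: Cronbach's α = k/(k−1) · (1 − Σ item variances / total-score variance). A minimal sketch, with a made-up 5-respondent, 4-item data matrix purely to show the formula (not the study's data):

```python
from statistics import pvariance

# Cronbach's alpha for a k-item scale (toy data; one row per respondent).
def cronbach_alpha(rows):
    k = len(rows[0])
    items = list(zip(*rows))              # transpose to per-item score lists
    totals = [sum(r) for r in rows]       # each respondent's total score
    item_var = sum(pvariance(col) for col in items)
    return k / (k - 1) * (1 - item_var / pvariance(totals))

data = [[3, 4, 3, 4], [2, 2, 3, 2], [4, 5, 4, 5], [3, 3, 3, 3], [5, 4, 5, 4]]
alpha = cronbach_alpha(data)
print(round(alpha, 3))
```

Population variances (`pvariance`) are used on both numerator and denominator so the correction factors cancel; sample variances give the identical α.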
Title: Validity and Reliability of Caregiver Contribution to Self-Care of Chronic Obstructive Pulmonary Disease Inventory and Caregiver Self-Efficacy in Contributing to Self-Care Scale
Pub Date: 2023-09-01 | DOI: 10.1177/01632787221134712
Authors: Maria Matarese, Roberta Pendoni, Davide Ausili, Ercole Vellone, Maddalena De Maria

The study tested the construct validity and reliability of the Caregiver Contribution to Self-Care of Chronic Obstructive Pulmonary Disease (COPD) Inventory and the Caregiver Self-Efficacy in Contributing to Self-Care of COPD Scale. The two instruments were developed by modifying the Self-Care of COPD Inventory and the Self-Care Self-Efficacy Scale in COPD into caregiver versions. Their psychometric properties were tested in a convenience sample of 261 informal caregivers of COPD patients recruited in Italy in two cross-sectional studies. Structural validity was tested by confirmatory factor analysis, construct validity by posing several hypotheses, and internal consistency through factor score determinacy and the global reliability index for multidimensional scales. In confirmatory factor analysis, the caregiver contribution to self-care maintenance, monitoring, and management scales, which compose the Caregiver Contribution to Self-Care of COPD Inventory, presented good fit indices. Global reliability indices ranged from 0.75 to 0.88. The caregiver self-efficacy scale presented a comparative fit index of 0.96 and a global reliability index of 0.82. The caregiver contribution to self-care and caregiver self-efficacy scales correlated moderately among themselves and with the patient versions of the scales, and scores were higher with caregiver-oriented dyadic care types and female caregivers. Our study provides evidence of the two instruments' construct validity and internal consistency.
Title: Comparability of Objective Structured Clinical Examinations (OSCEs) and Written Tests for Assessing Medical School Students' Competencies: A Scoping Review
Pub Date: 2023-09-01 | DOI: 10.1177/01632787231165797
Authors: Oswin Chang, Anne M Holbrook, Simran Lohit, Jiawen Deng, Janice Xu, Munil Lee, Alan Cheng

Objective Structured Clinical Examinations (OSCEs) and written tests are commonly used to assess health professional students, but it remains unclear whether the additional human resources and expenses OSCEs require, both in-person and online, are worthwhile for assessing competencies. This scoping review summarized literature identified by searching MEDLINE and EMBASE that compared (1) OSCEs and written tests and (2) in-person and online OSCEs for assessing health professional trainees' competencies. For question 1, 21 studies satisfied the inclusion criteria: medical trainees were the most frequently examined population (19, 90.5%), the most common comparison was OSCEs versus multiple-choice questions (MCQs) (18, 85.7%), and 18 studies (87.5%) examined the same competency domain with both methods. Most (77.5%) total-score correlation coefficients between testing methods were weak (r < 0.40). For question 2, 13 articles were included: in-person and online OSCEs were most often used for medical trainees (9, 69.2%), checklists were the most prevalent evaluation scheme (7, 63.6%), and 14 of 17 overall score comparisons showed no statistically significant difference. The generally low correlations between MCQ and OSCE scores provide insufficient evidence as to whether OSCEs deliver enough additional value to justify their cost. Online OSCEs may be a viable alternative to in-person OSCEs for certain competencies where technical challenges can be met.
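The review's "weak correlation" criterion (r < 0.40) is just the Pearson coefficient between paired total scores. A minimal sketch, with invented MCQ and OSCE scores for eight trainees (not data from any included study):

```python
from statistics import mean

# Pearson correlation between paired score lists (data are invented).
def pearson_r(x, y):
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

mcq  = [62, 71, 58, 80, 67, 75, 63, 69]
osce = [70, 66, 72, 74, 61, 78, 73, 64]
r = pearson_r(mcq, osce)
print(round(r, 2), "weak" if abs(r) < 0.40 else "not weak")
```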
Title: Longitudinal Assessment to Evaluate Continued Certification and Lifelong Learning in Healthcare Professionals: A Scoping Review
Pub Date: 2023-09-01 | DOI: 10.1177/01632787231164381
Authors: Robyn C Ward, Kathy A Baker, Dennis Spence, Colleen Leonard, Alysha Sapp, Shahid A Choudhry

Balancing lifelong learning with assessment for continued certification is a challenge faced by the healthcare professions. The value of single-point-in-time assessments has been questioned, and a shift to longitudinal assessment (LA) has been undertaken to assess lifelong learning over time. This scoping review was conducted to inform healthcare certifying organizations that are considering LA as a tool for assessing competence and lifelong learning in healthcare professionals. A search of six databases and grey literature yielded 957 articles; after screening and removal of duplicates, 14 articles were included. Most articles were background studies informing the underpinnings of LA, in the form of progress testing, pilot studies, and implementation processes. Progress testing is used in educational settings. Pilot studies reported satisfaction with LA's ease of use, online format, and support for lifelong learning. Implementation reports reveal that key aspects of success include stakeholder participation, phased rollout, and a publicly available content outline. Initial outcomes data affirm that LA addresses knowledge gaps and results in improved performance on maintenance-of-certification exams. Future research is needed to substantiate validity evidence for LA and its correlation with high-stakes exam performance when assessing lifelong learning and continued competence of healthcare professionals over time.
Title: Evaluating the DR-ED Listserv as a Medical Education Networking and Support Tool
Pub Date: 2023-09-01 | DOI: 10.1177/01632787231179420
Authors: Stacey Pylman, Brian Mavis

The listserv, although considered old technology by some, continues to benefit subscribers and grow its subscriber base. We investigated the roles the DR-ED listserv plays within the medical education community, asking: Who subscribes? Why do they subscribe? How do they use the listserv? We conducted a mixed-methods evaluation of the DR-ED listserv based on message content analysis and user surveys. We found that DR-ED fulfills medical educators' needs to network collegially, keep current with issues and ideas in the field, share solutions to problems, share resources, and advertise development opportunities. We found two types of listserv engagement: (a) one-way engagement, using the listserv as a resource, and (b) two-way engagement, both using and sharing resources. Our findings also highlight the value users attribute to virtual resources and the role listservs can play as economical professional development in a time of constrained budgets, and our analysis methods can guide future listserv evaluations. We conclude that relatively easy access to a global medical education listserv is one strategy for creating a community of practice among medical education practitioners.
Title: Evaluation of the Psychometric Properties of a Newly Developed Chinese Screening Tool for Speech Disorders in Patients With Parkinson's Disease
Pub Date: 2023-06-01 | DOI: 10.1177/01632787221108458
Authors: Chi-Lin Chen, Ching-Huang Lin, Chen-San Su, Hsiang-Chun Cheng, Li-Mei Chen, Rong-Ju Cherng

The study aimed to analyze the psychometric properties of a newly developed Chinese screening tool, the Chinese version of the Speech Disorders in Parkinson's Disease Questionnaire (SDPD-C). The SDPD-C is a 24-item questionnaire with four assessment domains. Overall, 93 patients with idiopathic Parkinson's disease (PD) (age 70.1 ± 8.9 years) and 76 healthy older adults (age 67.2 ± 8.1 years) participated in the psychometric analysis. The internal consistency of the SDPD-C was .91 (four dimensions: .69-.85), and test-retest reliability was .91 (four dimensions: .85-.88). The SDPD-C was highly correlated with the Voice Handicap Index-10 and the Movement Disorder Society-Unified Parkinson's Disease Rating Scale II 2.1 (r = .83 and .78, respectively). SDPD-C scores also differed significantly between stages 1 and 4 of the Hoehn and Yahr Scale (p < .05). The area under the receiver operating characteristic curve was .955 (95% confidence interval, .927-.983; asymptotic significance p < .001), and the optimal cut-off score was 36, with a sensitivity of .849 and specificity of .947. The results indicate that the SDPD-C showed good reliability, validity, accuracy, and discrimination, and it can be used as a screening tool for speech disorders in patients with PD.
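An "optimal cut-off" of the kind reported above is commonly chosen by maximising Youden's J = sensitivity + specificity − 1 along the ROC curve (the study does not name its criterion, so this is a plausible sketch, with made-up, perfectly separated scores rather than SDPD-C data):

```python
# Choose a screening cut-off by maximising Youden's J (toy, invented data).
def youden_cutoff(scores, labels):           # labels: 1 = patient, 0 = healthy
    best_j, best_cut = -1.0, None
    for cut in sorted(set(scores)):          # treat each observed score as a candidate
        tp = sum(1 for s, y in zip(scores, labels) if y == 1 and s >= cut)
        fn = sum(1 for s, y in zip(scores, labels) if y == 1 and s < cut)
        tn = sum(1 for s, y in zip(scores, labels) if y == 0 and s < cut)
        fp = sum(1 for s, y in zip(scores, labels) if y == 0 and s >= cut)
        sens, spec = tp / (tp + fn), tn / (tn + fp)
        j = sens + spec - 1
        if j > best_j:
            best_j, best_cut = j, cut
    return best_cut, best_j

scores = [12, 20, 28, 33, 38, 41, 47, 55]
labels = [0, 0, 0, 0, 1, 1, 1, 1]
cut, j = youden_cutoff(scores, labels)
print(cut, j)
```

With perfectly separated groups like these, the chosen threshold sits at the lowest patient score and J reaches its maximum of 1; real data trade sensitivity against specificity, as the SDPD-C's .849/.947 pair does.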
Title: Psychometric Properties of the Turkish Version of the JHand for the Patient-Oriented Outcome Measure for Patients with Hand and Elbow Disorders
Pub Date: 2023-06-01 | DOI: 10.1177/01632787221146245
Authors: Hasan Atacan Tonak, Yener Aydin, Burc Ozcanyuz, Haluk Ozcanli, Kosuke Uehara, Yutaka Morizaki

The JHand is an easy-to-understand questionnaire whose items are independent of hand dominance. It was developed to evaluate patients with hand and elbow disorders but had not previously been translated into Turkish and validated. The aim of this study was to investigate the psychometric properties of a culturally adapted Turkish version of the JHand for Turkish patients. A total of 262 patients were included and evaluated with the JHand, the Disabilities of the Arm, Shoulder, and Hand (DASH) Questionnaire, and the Hand20. Internal consistency and test-retest analyses were applied to determine the reliability of the Turkish version of the JHand; confirmatory factor analysis and similar-scale validity were used to determine its validity. The Turkish version of the JHand showed high internal consistency and excellent test-retest reliability (Cronbach α = 0.907, ICC = 0.923). Its model fit indices showed good and acceptable fit relative to reference values. Statistically significant, very strong positive correlations were found between the JHand and the DASH (r = .825, p < .001) and between the JHand and the Hand20 (r = .846, p < .001). The Turkish version of the JHand had excellent internal consistency and test-retest reliability as well as a high level of validity.
Title: Erratum
Pub Date: 2023-06-01 | DOI: 10.1177/01632787231163901