Pub Date: 2023-09-01 | DOI: 10.1177/01632787231165797
Oswin Chang, Anne M Holbrook, Simran Lohit, Jiawen Deng, Janice Xu, Munil Lee, Alan Cheng
Objective Structured Clinical Examinations (OSCEs) and written tests are commonly used to assess health professional students, but it remains unclear whether the additional human resources and expenses required for OSCEs, both in-person and online, are worthwhile for assessing competencies. This scoping review summarized literature identified by searching MEDLINE and EMBASE comparing 1) OSCEs and written tests and 2) in-person and online OSCEs, for assessing health professional trainees' competencies. For Q1, 21 studies satisfied inclusion criteria. The most examined health profession was medical trainees (19, 90.5%), the comparison was most frequently OSCEs versus multiple-choice questions (MCQs) (18, 85.7%), and 18 (87.5%) examined the same competency domain. Most (77.5%) total score correlation coefficients between testing methods were weak (r < 0.40). For Q2, 13 articles were included. In-person and online OSCEs were most used for medical trainees (9, 69.2%), checklists were the most prevalent evaluation scheme (7, 63.6%), and 14/17 overall score comparisons were not statistically significantly different. Generally low correlations exist between MCQ and OSCE scores, providing insufficient evidence as to whether OSCEs provide sufficient value to be worth their additional cost. Online OSCEs may be a viable alternative to in-person OSCEs for certain competencies where technical challenges can be met.
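As context for the r < 0.40 "weak" threshold used above, here is a minimal sketch (not taken from the review) of how a total-score correlation between OSCE and written-test results might be computed and classified; the score arrays, variable names, and threshold label are illustrative assumptions.

```python
# Hypothetical illustration: correlating paired OSCE and MCQ total scores
# and applying the review's r < 0.40 "weak" threshold. Data are invented.
from statistics import mean, stdev

osce_scores = [72, 65, 80, 58, 91, 77, 69, 84, 73, 60]   # hypothetical totals
mcq_scores  = [68, 74, 71, 62, 85, 70, 79, 66, 75, 64]   # hypothetical totals

def pearson_r(x, y):
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
    return cov / (stdev(x) * stdev(y))

r = pearson_r(osce_scores, mcq_scores)
print(f"r = {r:.2f} -> {'weak (r < 0.40)' if abs(r) < 0.40 else 'moderate or stronger'}")
```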
{"title":"Comparability of Objective Structured Clinical Examinations (OSCEs) and Written Tests for Assessing Medical School Students' Competencies: A Scoping Review.","authors":"Oswin Chang, Anne M Holbrook, Simran Lohit, Jiawen Deng, Janice Xu, Munil Lee, Alan Cheng","doi":"10.1177/01632787231165797","DOIUrl":"https://doi.org/10.1177/01632787231165797","url":null,"abstract":"<p><p>Objective Structured Clinical Examinations (OSCEs) and written tests are commonly used to assess health professional students, but it remains unclear whether the additional human resources and expenses required for OSCEs, both in-person and online, are worthwhile for assessing competencies. This scoping review summarized literature identified by searching MEDLINE and EMBASE comparing 1) OSCEs and written tests and 2) in-person and online OSCEs, for assessing health professional trainees' competencies. For Q1, 21 studies satisfied inclusion criteria. The most examined health profession was medical trainees (19, 90.5%), the comparison was most frequently OSCEs versus multiple-choice questions (MCQs) (18, 85.7%), and 18 (87.5%) examined the same competency domain. Most (77.5%) total score correlation coefficients between testing methods were weak (<i>r</i> < 0.40). For Q2, 13 articles were included. In-person and online OSCEs were most used for medical trainees (9, 69.2%), checklists were the most prevalent evaluation scheme (7, 63.6%), and 14/17 overall score comparisons were not statistically significantly different. Generally low correlations exist between MCQ and OSCE scores, providing insufficient evidence as to whether OSCEs provide sufficient value to be worth their additional cost. Online OSCEs may be a viable alternative to in-person OSCEs for certain competencies where technical challenges can be met.</p>","PeriodicalId":12315,"journal":{"name":"Evaluation & the Health Professions","volume":"46 3","pages":"213-224"},"PeriodicalIF":2.9,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10443966/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10075023","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-09-01 | DOI: 10.1177/01632787231164381
Robyn C Ward, Kathy A Baker, Dennis Spence, Colleen Leonard, Alysha Sapp, Shahid A Choudhry
Balancing lifelong learning with assessment for continued certification is a challenge faced by the healthcare professions. The value of single-point-in-time assessments has been questioned, and a shift to longitudinal assessments (LA) has been undertaken to assess lifelong learning over time. This scoping review was conducted to inform healthcare certifying organizations that are considering LA as an assessment tool for competence and lifelong learning in healthcare professionals. A search of six databases and grey literature yielded 957 articles. After screening and removal of duplicates, 14 articles were included. Most articles were background studies informing the underpinnings of LA in the form of progress testing, pilot studies, and implementation processes. Progress testing is used in educational settings. Pilot studies reported satisfaction with LA's ease of use, online format, and provision of lifelong learning. Reports of implementation reveal that key aspects of success include stakeholder participation, phased rollout, and a publicly available content outline. Initial outcomes data affirm that LA addresses knowledge gaps and results in improved performance on maintenance-of-certification exams. Future research is needed to substantiate validity evidence for LA and its correlation with high-stakes exam performance when assessing lifelong learning and continued competence of healthcare professionals over time.
{"title":"Longitudinal Assessment to Evaluate Continued Certification and Lifelong Learning in Healthcare Professionals: A Scoping Review.","authors":"Robyn C Ward, Kathy A Baker, Dennis Spence, Colleen Leonard, Alysha Sapp, Shahid A Choudhry","doi":"10.1177/01632787231164381","DOIUrl":"https://doi.org/10.1177/01632787231164381","url":null,"abstract":"<p><p>The balance of lifelong learning with assessment for continued certification is a challenge faced by healthcare professions. The value of single-point-in-time assessments has been questioned, and a shift to longitudinal assessments (LA) has been undertaken to assess lifelong learning over-time. This scoping review was conducted to inform healthcare certifying organizations who are considering LA as an assessment tool of competence and lifelong learning in healthcare professionals. A search of 6 databases and grey literature yielded 957 articles. After screening and removal of duplicates, 14 articles were included. Most articles were background studies informing the underpinnings of LA in the form of progress testing, pilot studies, and process of implementation. Progress testing is used in educational settings. Pilot studies reported satisfaction with LA's ease of use, online format, and provision of lifelong learning. Implementation processes reveal that key aspects of success include stakeholder participation, phased rollout, and a publicly available content outline. Initial outcomes data affirm that LA addresses knowledge gaps, and results in improved performance on maintenance of certification exams. Future research is needed to substantiate validity evidence of LA and its correlation with high-stakes exam performance when assessing lifelong learning and continued competence of healthcare professionals over time.</p>","PeriodicalId":12315,"journal":{"name":"Evaluation & the Health Professions","volume":"46 3","pages":"199-212"},"PeriodicalIF":2.9,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10075024","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-09-01 | DOI: 10.1177/01632787231179420
Stacey Pylman, Brian Mavis
The listserv, although considered old technology by some, continues to show benefits for, and growth in, its subscribers. We investigated the roles the DR-ED listserv plays within the medical education community. We asked: Who subscribes? Why do they subscribe? And how do they use the listserv? We conducted a mixed-methods evaluation of the DR-ED listserv based on message content analysis and user surveys. We found that the DR-ED listserv fulfills medical educators' need to network collegially; keep current with issues and ideas in the field; share solutions to problems; share resources; and advertise development opportunities. We found two types of listserv engagement: (a) one-way engagement, using the listserv as a resource, and (b) two-way engagement, both using and sharing resources. Our findings also highlight the value users attribute to virtual resources and the role listservs can play as economical professional development in a time of constrained budgets, and our analysis methods can be used to guide future listserv evaluations. We conclude that relatively easy access to a global medical education listserv is one strategy for creating a community of practice for medical education practitioners.
{"title":"Evaluating the DR-ED Listserv as a Medical Education Networking and Support Tool.","authors":"Stacey Pylman, Brian Mavis","doi":"10.1177/01632787231179420","DOIUrl":"https://doi.org/10.1177/01632787231179420","url":null,"abstract":"<p><p>The listserv, although considered old technology by some, continues to show benefit for and growth in subscribers. We investigated the roles the DR-ED listserv plays within the medical education community. We asked, <i>Who subscribes? Why do they subscribe?</i> and <i>How do they use the listserv?</i> We conducted a mixed-methods evaluation of the DR-ED listserv based on message content analysis and user surveys. We found the DR-ED listserv fulfills medical educators' need to network collegially; keep current with issues and ideas in the field; share solutions to problems; share resources; and advertise development opportunities. We found two types of listserv engagement: a) one-way engagement by using it as a resource, or two-way engagement by using and sharing resources. Our findings also highlight the value users attribute to virtual resources and the role listservs can play as economical professional development in a time of constrained costs, and our analysis methods can be used to guide future listserv evaluations. We conclude the relatively easy access to a global medical education listserv is one strategy to create a community of practice for medical education practitioners.</p>","PeriodicalId":12315,"journal":{"name":"Evaluation & the Health Professions","volume":"46 3","pages":"233-241"},"PeriodicalIF":2.9,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10453037","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The study aimed to analyze the psychometric properties of a newly developed Chinese screening tool, the Chinese Version of the Speech Disorders in Parkinson's Disease Questionnaire (SDPD-C). The SDPD-C contains a 24-item questionnaire with four assessment domains. Overall, 93 patients with idiopathic Parkinson's disease (PD) (age 70.1 ± 8.9 years) and 76 healthy older adults (age 67.2 ± 8.1 years) participated in the psychometric analysis study. The internal consistency of the SDPD-C was .91 (four dimensions: .69-.85), and test-retest reliability was .91 (four dimensions: .85-.88). The SDPD-C was highly correlated with the Voice Handicap Index-10 and Movement Disorder Society-Unified Parkinson's Disease Rating Scale II 2.1 (r = .83 and .78, respectively). The SDPD-C scores also differed significantly between stages 1 and 4 of the Hoehn and Yahr Scale (p < .05). The area under the receiver operating characteristic curve was .955 (95% confidence interval, .927-.983; asymptotic significance p < .001), and the optimal cut-off score of this study was 36, with a sensitivity of .849 and specificity of .947. The results indicate that SDPD-C showed good reliability, validity, accuracy, and discrimination. It can be used as a screening tool for speech disorders in patients with PD.
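For readers unfamiliar with how a screening cutoff's operating characteristics are derived, the following is a hedged sketch (not the authors' code) of computing sensitivity and specificity at a cutoff such as the reported 36; the scores and diagnostic labels below are invented for illustration.

```python
# Hypothetical sketch: sensitivity and specificity of a questionnaire cutoff.
# Scores >= cutoff are treated as screen-positive; the data are invented.
cutoff = 36  # optimal cutoff reported for the SDPD-C in this study

# (score, has_speech_disorder) pairs -- purely illustrative, not study data
samples = [(52, True), (41, True), (33, True), (60, True), (38, True),
           (20, False), (35, False), (28, False), (44, False), (31, False)]

tp = sum(1 for s, d in samples if d and s >= cutoff)
fn = sum(1 for s, d in samples if d and s < cutoff)
tn = sum(1 for s, d in samples if not d and s < cutoff)
fp = sum(1 for s, d in samples if not d and s >= cutoff)

sensitivity = tp / (tp + fn)   # proportion of true cases screened positive
specificity = tn / (tn + fp)   # proportion of non-cases screened negative
print(f"sensitivity = {sensitivity:.3f}, specificity = {specificity:.3f}")
```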
{"title":"Evaluation of the Psychometric Properties of a Newly Developed Chinese Screening Tool for Speech Disorders in Patients With Parkinson's Disease.","authors":"Chi-Lin Chen, Ching-Huang Lin, Chen-San Su, Hsiang-Chun Cheng, Li-Mei Chen, Rong-Ju Cherng","doi":"10.1177/01632787221108458","DOIUrl":"https://doi.org/10.1177/01632787221108458","url":null,"abstract":"<p><p>The study aimed to analyze the psychometric properties of a newly developed Chinese screening tool, the Chinese Version of the Speech Disorders in Parkinson's Disease Questionnaire (SDPD-C). The SDPD-C contains a 24-item questionnaire with four assessment domains. Overall, 93 patients with idiopathic Parkinson's disease (PD) (age 70.1 ± 8.9 years) and 76 healthy older adults (age 67.2 ± 8.1 years) participated in the psychometric analysis study. The internal consistency of the SDPD-C was .91 (four dimensions: .69-.85), and test-retest reliability was .91 (four dimensions: .85-.88). The SDPD-C was highly correlated with the Voice Handicap Index-10 and Movement Disorder Society-Unified Parkinson's Disease Rating Scale II 2.1 (r = .83 and .78, respectively). The SDPD-C scores also differed significantly between stages 1 and 4 of the Hoehn and Yahr Scale (<i>p</i> < .05). The area under the receiver operating characteristic curve was .955 (95% confidence interval, .927-.983; asymptotic significance <i>p</i> < .001), and the optimal cut-off score of this study was 36, with a sensitivity of .849 and specificity of .947. The results indicate that SDPD-C showed good reliability, validity, accuracy, and discrimination. It can be used as a screening tool for speech disorders in patients with PD.</p>","PeriodicalId":12315,"journal":{"name":"Evaluation & the Health Professions","volume":"46 2","pages":"127-134"},"PeriodicalIF":2.9,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9520855","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-06-01 | DOI: 10.1177/01632787231163901
Erratum. Evaluation & the Health Professions, 46(2), 194.
The JHand is an easy-to-understand questionnaire whose items are designed to be independent of hand dominance. It was developed to evaluate patients with hand and elbow disorders; however, the JHand had not previously been translated and validated in Turkish. The aim of this study was to investigate the psychometric properties of a culturally adapted Turkish version of the JHand for Turkish patients. A total of 262 patients were included in the study and were evaluated with the JHand, the Disabilities of the Arm, Shoulder, and Hand (DASH) questionnaire, and the Hand20. Internal consistency and test-retest analyses were applied to determine the reliability of the Turkish version of the JHand, and confirmatory factor analysis and similar-scale validity were used to determine its validity. The Turkish version of the JHand showed high internal consistency and excellent test-retest reliability (Cronbach's α = 0.907, ICC = 0.923). Its model fit indices showed good and acceptable fit relative to reference values. Statistically significant, very strong positive correlations were found between the JHand and the DASH (r = .825, p < .001) and between the JHand and the Hand20 (r = .846, p < .001). The Turkish version of the JHand had excellent internal consistency and test-retest reliability as well as a high level of validity.
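As a hedged illustration of the internal-consistency statistic reported above (Cronbach's α = 0.907), the sketch below applies the standard Cronbach's alpha formula to a hypothetical item-response matrix; it is not the authors' analysis code.

```python
# Cronbach's alpha for an item-response matrix (rows = respondents,
# columns = items). The data below are invented for illustration only.
import numpy as np

responses = np.array([
    [4, 5, 4, 3, 4],
    [2, 2, 3, 2, 2],
    [5, 4, 5, 5, 4],
    [3, 3, 2, 3, 3],
    [1, 2, 1, 2, 1],
])

k = responses.shape[1]                         # number of items
item_vars = responses.var(axis=0, ddof=1)      # per-item sample variances
total_var = responses.sum(axis=1).var(ddof=1)  # variance of total scores
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(f"Cronbach's alpha = {alpha:.3f}")
```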
{"title":"Psychometric Properties of the Turkish Version of the JHand for the Patient-Oriented Outcome Measure for Patients with Hand and Elbow Disorders.","authors":"Hasan Atacan Tonak, Yener Aydin, Burc Ozcanyuz, Haluk Ozcanli, Kosuke Uehara, Yutaka Morizaki","doi":"10.1177/01632787221146245","DOIUrl":"https://doi.org/10.1177/01632787221146245","url":null,"abstract":"<p><p>The JHand is an easy-to-understand questionnaire that includes questions that exclude hand dominance. It was developed to evaluate patients with hand and elbow disorders. However, JHand has not been translated and validated in the Turkish language. The aim of this study is to investigate the psychometric properties of the culturally adapted Turkish version of the JHand for Turkish patients. A total of 262 patients were included in the study. JHand, Disabilities of the Arm, Shoulder, and Hand Questionnaire, and Hand20 were used to evaluate patients. Internal consistency and test-retest analyses were applied to determine the reliability of the Turkish version of the JHand. Confirmatory factor analysis and similar scale validity were used to determine its validity. The Turkish version of the JHand showed high levels of internal consistency and excellent test-retest reliability (Cronbach α = 0.907, ICC = 0.923). The model fit indices of the Turkish version of the JHand had good and acceptable fit with reference values. Statistically positive and very strong correlations were found between JHand and DASH (r = .825, p < .001) as well as the JHand and Hand20 (r = .846, p < .001). The Turkish version of the JHand had excellent internal consistency and test-retest reliability as well as a high level of validity.</p>","PeriodicalId":12315,"journal":{"name":"Evaluation & the Health Professions","volume":"46 2","pages":"152-158"},"PeriodicalIF":2.9,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9521353","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The modified Dynamic Gait Index (mDGI) is one of the valid instruments used in the evaluation of gait disorders. This study aimed to translate the mDGI into Turkish and evaluate the evidence for its reliability and validity in an elderly population. For test-retest reliability, the mDGI was administered twice; for inter-rater reliability, it was administered on the same day and scored by two raters. Concurrent validity was assessed using Pearson's correlation between the Turkish mDGI score and the Timed Up and Go (TUG), the Berg Balance Scale (BBS), and the 10-m Walk Test (10-MWT). The internal consistency of the mDGI was excellent (Cronbach's alpha = 0.97), and test-retest reliability (ICC = 0.95; 95% CI 0.84–0.95) and inter-rater reliability (ICC = 0.95; 95% CI 0.85–0.95) were excellent. A moderate negative correlation was found between the mDGI and the TUG (r = −0.73, p < .0001), and moderate positive correlations were found with the BBS (r = 0.71, p < .0001) and the 10-MWT (r = 0.72, p < .0001). The Turkish version of the mDGI was found to be a valid and reliable assessment instrument for gait and balance in the elderly.
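The test-retest and inter-rater ICCs reported above can be estimated in several ways; the sketch below shows one common two-way ANOVA formulation (ICC(3,1), consistency) applied to invented ratings. The model choice and the data are assumptions for illustration, since the abstract does not state which ICC model was used.

```python
# Two-way ICC from the standard ANOVA decomposition (ICC(3,1), consistency).
# Rows = participants, columns = the two mDGI administrations (invented data).
import numpy as np

ratings = np.array([
    [58, 56],
    [49, 50],
    [62, 61],
    [40, 43],
    [55, 54],
    [45, 44],
], dtype=float)

n, k = ratings.shape
grand = ratings.mean()
ms_rows = k * ((ratings.mean(axis=1) - grand) ** 2).sum() / (n - 1)   # between-subjects MS
ms_cols = n * ((ratings.mean(axis=0) - grand) ** 2).sum() / (k - 1)   # between-sessions MS
ss_err = (((ratings - grand) ** 2).sum()
          - (n - 1) * ms_rows - (k - 1) * ms_cols)
ms_err = ss_err / ((n - 1) * (k - 1))

icc_3_1 = (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)
print(f"ICC(3,1) = {icc_3_1:.2f}")
```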
{"title":"Reliability and Validity of the Turkish Version of the Modified Dynamic Gait Index in the Elderly.","authors":"Emrah Zirek, Rustem Mustafaoglu, Aynur Cicek, Ishtiaq Ahmed, Savvas Mavromoustakos","doi":"10.1177/01632787221128311","DOIUrl":"https://doi.org/10.1177/01632787221128311","url":null,"abstract":"The modified Dynamic Gait Index (mDGI) is one of the valid instruments used in the evaluation of gait disorders. This study aimed to translate the mDGI into Turkish and evaluate the evidence for its reliability and validity for use in an elderly population. For test-retest reliability, the mDGI was administered twice, and for inter-rater reliability, the mDGI was administered alone on the same day by two raters. Concurrent validity of the mDGI was assessed using Pearson’s correlation analysis between the Turkish version of the mDGI score and the Timed Up and Go (TUG), Berg Balance Scale (BBS), and 10-m Walk Test (10-MWT), respectively. The internal consistency of the mDGI was found to be excellent (Cronbach’s alpha = 0.97) and test-retest (ICC = 0.95; 95% Cl (0.84–0.95)) and inter-rater reliability (ICC = 0.95; 95% Cl (0.85–0.95)) were excellent. A negative, moderate correlation was found between mDGI and TUG (r = −0.73, p < .0001), and a positive, moderate correlation with BBS (r = 0.71, p < .0001) and 10-MWT (r = 0.72, p < .0001). The Turkish version of the mDGI was found to be a valid and reliable assessment instrument for gait and balance in the elderly.","PeriodicalId":12315,"journal":{"name":"Evaluation & the Health Professions","volume":"46 2","pages":"135-139"},"PeriodicalIF":2.9,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9899012","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-06-01 | Epub Date: 2022-05-04 | DOI: 10.1177/01632787221096904
Mohammad Ali Zakeri, Ali Esmaeili Nadimi, Golamreza Bazmandegan, Maryam Zakeri, Mahlagha Dehghan
The Patient Activation Measure (PAM) is a 13-item questionnaire that assesses patients' knowledge, skills, and confidence in self-management. The current study aimed to translate the American version of the PAM-13 into Persian and test the psychometric properties of the Persian version among chronic patients. This cross-sectional study was conducted on 438 chronically ill patients in Rafsanjan, Iran, from May to November 2019. The American version of the PAM-13 was translated into Persian using a standardized forward-backward translation method. Internal consistency, test-retest reliability, face and content validity, as well as construct validity (structural and convergent validity) were all assessed. The content validity index of the Patient Activation Measure-13 Persian (PAM-13-P) was 0.91. Exploratory and confirmatory factor analyses showed that the PAM-13-P had meaningful structural validity. The PAM-13-P scores were negatively correlated with the Partner in Health Measure (PIH) (r = -0.29, p < 0.001). In addition, the PAM-13-P scores were positively correlated with the Satisfaction with Life Scale (SWLS) (r = 0.31, p < 0.001). The internal consistency was 0.88, and the repeatability was excellent [Intraclass Correlation Coefficient (ICC): 0.96; confidence interval (CI): 0.94-0.98]. This study demonstrates that the PAM-13-P is a reliable and valid measure for assessing activation among chronically ill patients. The PAM-13-P scale assesses the level of self-management of chronic patients and identifies appropriate care strategies to meet their needs.
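As an illustration of the content validity index (CVI) reported above (0.91), the sketch below shows the conventional item-level and scale-average CVI computation; the expert ratings are hypothetical, and the abstract does not specify which CVI variant the authors used.

```python
# Sketch of a content validity index (CVI) computation: each expert rates
# each item's relevance on a 4-point scale; I-CVI is the share of experts
# giving a 3 or 4, and the scale-level CVI (S-CVI/Ave) is the mean of the
# I-CVIs. Ratings below are invented; the abstract only reports 0.91.
expert_ratings = [            # rows = items, columns = experts
    [4, 3, 4, 4, 3],
    [3, 4, 4, 2, 4],
    [4, 4, 3, 4, 4],
]

i_cvi = [sum(1 for r in item if r >= 3) / len(item) for item in expert_ratings]
s_cvi_ave = sum(i_cvi) / len(i_cvi)
print(f"I-CVIs = {i_cvi}, S-CVI/Ave = {s_cvi_ave:.2f}")
```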
{"title":"Psychometric Evaluation of Chronic Patients Using the Persian Version of Patient Activation Measure (PAM).","authors":"Mohammad Ali Zakeri, Ali Esmaeili Nadimi, Golamreza Bazmandegan, Maryam Zakeri, Mahlagha Dehghan","doi":"10.1177/01632787221096904","DOIUrl":"10.1177/01632787221096904","url":null,"abstract":"<p><p>The Patient Activation Measure (PAM) is a 13-item questionnaire that assesses patients' knowledge, skills, and confidence in self-management. The current study aimed to translate the American version of the PAM-13 into Persian and test the psychometric properties of the Persian version among chronic patients. This cross-sectional study was conducted on 438 chronically ill patients in Rafsanjan, Iran from May to November 2019. The American version of the PAM-13 was translated into Persian using a standardized forward-backward translation method. Internal consistency, test-retest reliability, face and content validity, as well as construct validity (structural and convergent validity) were all assessed. The content validity index of the Patient Activation Measure-13 Persian (PAM-13-P) was 0.91. Exploratory and confirmatory factor analyses showed that the PAM-13-P had a meaningful structural validity. The PAM-13-P scores were negatively correlated with the Partner in Health Measure (PIH) (r = -0.29, <i>p</i> < 0.001). In addition, the PAM13-P scores were positively correlated with the Satisfaction with Life Scale (SWLS) (r = 0.31, <i>p</i> < 0.001). The internal consistency was 0.88, and the repeatability was excellent [Intraclass Correlation Coefficient (ICC):0.96 and confidence interval (CI): 0.94-0.98]. This study demonstrates that the PAM-13-P is a reliable and valid measure for assessing activation among chronically ill patients. The PAM-13-P scale assesses the level of self-management of chronic patients and identifies appropriate care strategies to meet their needs.</p>","PeriodicalId":12315,"journal":{"name":"Evaluation & the Health Professions","volume":"46 2","pages":"115-126"},"PeriodicalIF":2.9,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9580736","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This study evaluates the psychometric properties of the Access of Older Adults to Outpatient Primary-Care Health Services Scale (AOAOPHSS) in research conducted among 707 Mexican older adults selected by convenience sampling from 14 rural locations and one urban location. The AOAOPHSS explores 10 dimensions across two integrated subscales: Accessibility and Personal Abilities. Data analysis was performed in five phases. First, potentially biased responses were identified. Second, the response efficiency of the items and their association with external variables were evaluated. Third, the basic properties of the scores for the AOAOPHSS subscale dimensions were identified using non-parametric Mokken Scaling Analysis (MSA). Fourth, Structural Equation Modeling was used to identify the properties of the internal structure of the latent construct. Finally, reliability and internal consistency were evaluated at both the score and item levels. The following findings emerged: 13 items with inefficient response options were removed, and 24 were retained using the MSA. The latent structure of the retained items was defined based on 21 items across five Accessibility Subscale dimensions. Its internal consistency reliability ranged between 0.67 and 0.81 (omega coefficients) and between 0.61 and 0.78 (alpha coefficients). Accordingly, this paper discusses the overall implications of using the Accessibility Subscale.
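For the omega coefficients reported above, a common computation is McDonald's omega from standardized factor loadings and residual variances; the sketch below assumes that formulation (the paper may use a different estimator) and uses invented loadings.

```python
# McDonald's omega for one subscale dimension:
# omega = (sum of loadings)^2 / ((sum of loadings)^2 + sum of residual variances).
# Loadings below are invented standardized loadings under a 1-factor model.
loadings = [0.62, 0.70, 0.55, 0.66, 0.58]
residuals = [1 - l ** 2 for l in loadings]   # residual item variances

sum_l = sum(loadings)
omega = sum_l ** 2 / (sum_l ** 2 + sum(residuals))
print(f"omega = {omega:.2f}")
```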
{"title":"Psychometric Properties of the Access of Older Adults to Outpatient Primary-Care Health Services Scale.","authors":"Gerardo Santoyo-Sánchez, Hortensia Reyes-Morales, Sergio Flores-Hernández, Blanca Estela Pelcastre-Villafuerte, César Merino-Soto","doi":"10.1177/01632787231158806","DOIUrl":"https://doi.org/10.1177/01632787231158806","url":null,"abstract":"<p><p>This study evaluates the psychometric properties of the Access of Older Adults to Outpatient Primary-Care Health Services Scale (AOAOPHSS), in research conducted among 707 Mexican older adults selected by convenience from 14 rural and one urban locations. The AOAOPHSS explores 10 dimensions of two integrated subscales: Accessibility and Personal Abilities. Data analysis was performed in five phases. First, potentially biased responses were identified. Second, the response efficiency of the items and their association with external variables were evaluated. Third, the basic properties of the scores for the subscales' dimensions of the AOAOPHSS were identified using non-parametric Mokken Scaling Analysis (MSA). Fourth, the Structural Equation Modeling methodology was used to identify the properties of the internal structure of the latent construct. Finally, reliability and internal consistency were evaluated at both score and item levels. The following findings emerged. 13 items with inefficient response options were removed, and 24 were retained using the MSA. The latent structure of the latter was defined based on 21 items of five Accessibility Subscale dimensions. Its internal consistency reliability ranged between 0.67 and 0.81 (omega coefficients) and between 0.61 and 0.78 (alpha coefficients). Accordingly, this paper discusses the overall implications of using the Accessibility Subscale.</p>","PeriodicalId":12315,"journal":{"name":"Evaluation & the Health Professions","volume":"46 2","pages":"159-169"},"PeriodicalIF":2.9,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9581315","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-06-01 | DOI: 10.1177/01632787221127377
Muge Dereli, Turhan Kahraman, Christopher R France
The Pain Resilience Scale (PRS) is a useful tool that evaluates the capacity to maintain behavioral engagement and to adaptively regulate cognitions and emotions despite pain. This study aimed to translate the PRS into Turkish and investigate its psychometric properties. The Turkish version of the PRS was completed online by 332 healthy adults, and a subset of 105 respondents was re-assessed after 7-14 days. The reliability of the adapted measure was evaluated in terms of internal consistency and relative and absolute test-retest reliability. Validity was evaluated in terms of structural, construct, and known-group validity using positive and negative psychological scales. The Turkish version of the PRS has a three-factor structure with a cumulative explained variance of 78.06%. The total PRS score and its subscales correlated positively with pain self-efficacy, general resilience, and quality of life, and negatively with pain catastrophizing, kinesiophobia, anxiety, depression, and disability. PRS scores were significantly higher in those with high general resilience (p < 0.001). The PRS had high internal consistency and test-retest reliability; the Standard Error of Measurement (SEM) and Minimum Detectable Difference (MDD) were calculated as 2.9 and 8.0, respectively. The Turkish version of the PRS is a reliable and valid instrument for measuring pain resilience in terms of behavioral perseverance and cognitive positivity.
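The reported SEM of 2.9 and MDD of 8.0 are consistent with the conventional formulas SEM = SD × √(1 − ICC) and MDD95 = 1.96 × √2 × SEM; the short check below assumes those formulas, which the abstract does not state explicitly.

```python
# Consistency check of the reported MDD against the conventional formula
# MDD95 = 1.96 * sqrt(2) * SEM, assuming (our assumption) that this is the
# formula the authors used. SEM = 2.9 is the value reported in the abstract.
import math

sem = 2.9                                  # reported Standard Error of Measurement
mdd95 = 1.96 * math.sqrt(2) * sem          # minimum detectable difference at 95% confidence
print(f"MDD95 = {mdd95:.1f}")              # ~8.0, matching the reported value

# If the sample SD and the test-retest ICC are known, SEM itself is
# conventionally derived as SD * sqrt(1 - ICC):
def sem_from_sd_icc(sd: float, icc: float) -> float:
    return sd * math.sqrt(1 - icc)
```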
{"title":"Cross-Cultural Adaptation and Psychometric Validation of the Turkish Version of Pain Resilience Scale.","authors":"Muge Dereli, Turhan Kahraman, Christopher R France","doi":"10.1177/01632787221127377","DOIUrl":"https://doi.org/10.1177/01632787221127377","url":null,"abstract":"<p><p>The Pain Resilience Scale (PRS) is a useful tool that evaluates behavioral engagement and adaptively regulates cognitions and emotions despite the pain. This study aimed to translate the PRS to Turkish and investigate its psychometric properties. The Turkish version of PRS was completed online by 332 healthy adults, and a subset of 105 respondents was re-assessed after 7-14 days. The reliability of the adapted measure was evaluated in terms of internal consistency, relative, and absolute test-retest reliability. Validity was evaluated in terms of structural, construct, and known-group validity using positive and negative psychological scales. The Turkish version of PRS has a three-factor structure and its cumulative variance is 78.06%. The total PRS score and its subscales correlated positively with pain self-efficacy, general resilience, and quality of life, and negatively with pain catastrophizing, kinesiophobia, anxiety, depression, and disability. The PRS scores were significantly higher in those with high general resilience (<i>p</i> < 0.001). The PRS had high internal consistency and test-retest reliability. Standard Error of Measurement (SEM) and Minimum Detectable Difference (MDD) were calculated as 2.9 and 8.0, respectively. The Turkish version of PRS is a reliable and valid instrument for measuring pain resilience in terms of behavioral perseverance and cognitive positivity.</p>","PeriodicalId":12315,"journal":{"name":"Evaluation & the Health Professions","volume":"46 2","pages":"140-151"},"PeriodicalIF":2.9,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9882707","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}