Generalizability theory: a unified approach to assessing the dependability (reliability) of measurements in the health sciences.
D M VanLeeuwen, M D Barnes, M Pase. Journal of Outcome Measurement, 2(4): 302-25, 1998.

The reliability of health promotion program evaluation measures, behavioral and attitudinal measures, and clinical measures is a concern to many health educators. Classical reliability coefficients, such as Cronbach's alpha, apply only to narrowly defined, prespecified measurement situations. Classical theory does not provide adequate reliability assessments for criterion-referenced measures, for measurement situations with multiple sources of error, or for aggregate-level variables. Generalizability theory can be used to assess the reliability of measures in situations that Classical theory does not adequately model. It also affords a broader view and a deeper understanding of the dependability of measurements and of the role different sources of error play in the variability of measures.
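As a point of reference for the contrast drawn above, Cronbach's alpha reduces to a few lines of code. A minimal sketch, assuming a complete persons-by-items score matrix (the function name and data layout are illustrative, not from the paper):

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Classical internal-consistency estimate for a persons x items matrix.

    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
    """
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)      # sample variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of persons' total scores
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)
```

Generalizability theory goes beyond this single coefficient by estimating separate variance components for each source of error (e.g., persons, items, raters, occasions), typically from an ANOVA of a crossed measurement design.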
Team assessment utilizing a many-facet Rasch model.
J M Allen, R E Schumacker. Journal of Outcome Measurement, 2(2): 142-58, 1998.

As organizations begin to implement work teams, their assessment practices will ultimately reflect compensation strategies that move away from individual assessment, involving not only multiple raters but also multiple criteria. Team assessment with multiple raters and multiple criteria can, however, produce differences in ratings due to the leniency or severity of the individual raters. This study analyzed the ratings of the individual members of 31 different teams across 12 different criteria of team performance. Using the many-facet Rasch model, statistical differences among the teams and among the 12 criteria were calculated.
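For intuition, the many-facet Rasch model adds a rater-severity facet to the usual person-minus-item logit. A minimal sketch of the dichotomous form (operational analyses such as this one use polytomous rating scales and dedicated software; the function name is illustrative):

```python
import math

def mfrm_prob(ability: float, difficulty: float, severity: float) -> float:
    """Probability of a positive rating under a dichotomous many-facet
    Rasch model: logit = person ability - criterion difficulty - rater severity."""
    logit = ability - difficulty - severity
    return 1.0 / (1.0 + math.exp(-logit))
```

A lenient rater (negative severity) raises the probability of a positive rating for the same ratee and criterion, which is exactly the rating difference the model lets the analyst adjust for.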
Rasch measurement for reducing the items of the Nottingham Health Profile.
L Prieto, J Alonso, R Lamarca, B D Wright. Journal of Outcome Measurement, 2(4): 285-301, 1998.

The present study aimed to develop a short form of the Spanish version of the Nottingham Health Profile (NHP) by means of Rasch analysis. Data from Spanish studies that had included the NHP since 1987 were collected in a common database. Forty-five different studies were included, covering a total of 9,419 subjects from both the general population and groups with different clinical pathologies. The overall questionnaire (38 items) was analyzed simultaneously using the dichotomous response model. Parameter estimates, model-data fit, and separation statistics were computed. The items of the NHP were additionally regrouped into two scales: Physical (19 items) and Psychological (19 items). Separate Physical and Psychological parameter estimates were produced using the simultaneous item calibrations as anchor values. Misfitting items were deleted, resulting in a final 22-item short form (NHP22): 11 Physical and 11 Psychological. The evaluation of the item hierarchies confirmed the construct validity of the new questionnaire. To demonstrate the invariance of the NHP22 item calibrations, Rasch analyses were performed separately for each study included in the sample and for several sociodemographic and health status variables. Results confirmed the validity of using the NHP22 item calibrations to measure different groups of people categorized by gender, clinical condition, and health status.
The Job Responsibilities Scale: invariance in a longitudinal prospective study.
L H Ludlow, M E Lunz. Journal of Outcome Measurement, 2(4): 326-37, 1998.

The purpose of the present analysis was to determine the degree of invariance of the Job Responsibilities Scale from 1993 to 1995. Factor analyses conducted on both years' data revealed nearly identical factor patterns, and Rasch rating scale analyses yielded nearly identical pairs of item estimates. These results suggest that even though the overall frequency of performance of some medical technology laboratory tasks increased from 1993 to 1995, the relationships among the tasks themselves remained the same (invariant). This conclusion allows for a description of what it means to increase in level of personal job responsibility from year to year. In addition, these results suggest that at the conclusion of this prospective study it may be possible to objectively define the typical career mobility pattern of entry-level medical technologists.
Identifying measurement disturbance effects using Rasch item fit statistics and the Logit Residual Index.
R E Mount, R E Schumacker. Journal of Outcome Measurement, 2(4): 338-50, 1998.

A Monte Carlo study using simulated dichotomous data examined the effects of guessing on Rasch item fit statistics (weighted total, unweighted total, and unweighted between fit statistics) and on the Logit Residual Index (LRI). The data were simulated using 100 items, 100 persons, three levels of guessing (0%, 25%, and 50%), and two item difficulty distributions (normal and uniform). No significant differences were found among the mean Rasch item fit statistics for either distribution type as the probability of guessing the correct answer increased. The mean item scores differed significantly with uniformly distributed item difficulties, but not with normally distributed item difficulties. The LRI was more sensitive to large positive item misfit values associated with the unweighted total fit statistic than to similar values associated with the weighted total or unweighted between fit statistics. The greatest (negative) change in LRI values was observed when the unweighted total fit statistic had large positive values greater than 2.4. The LRI statistic was most useful in identifying the linear trend in the residuals for each item, thereby indicating differences in ability groups, i.e., differential item functioning.
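The abstract does not spell out the generating model for guessing; a common choice is a fixed lower asymptote, where an examinee answers correctly with the guessing probability and otherwise responds according to the Rasch model. A sketch under that assumption (names and data layout are illustrative):

```python
import numpy as np

def simulate_guessing(abilities, difficulties, guess_prob, rng):
    """Generate a persons x items matrix of 0/1 responses.

    With probability guess_prob a correct response occurs regardless of
    ability; otherwise the response follows the dichotomous Rasch model.
    """
    theta = np.asarray(abilities, dtype=float)[:, None]   # persons as rows
    delta = np.asarray(difficulties, dtype=float)[None, :]  # items as columns
    p_rasch = 1.0 / (1.0 + np.exp(-(theta - delta)))
    p = guess_prob + (1.0 - guess_prob) * p_rasch          # lower asymptote
    return (rng.random(p.shape) < p).astype(int)
```

With guess_prob = 0 this reduces to pure Rasch data; raising it toward the study's 25% and 50% conditions injects the measurement disturbance whose effect on the fit statistics was examined.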
The functional assessment measure (FAM) in closed traumatic brain injury outpatients: a Rasch-based psychometric study.
L Tesio, A Cantagallo. Journal of Outcome Measurement, 2(2): 79-96, 1998.

The Functional Assessment Measure (FAM) has been proposed as a measure of disability in post-acute Traumatic Brain Injury (TBI) outpatients. It comprises the 18 items of the Functional Independence Measure (FIMSM), scored in terms of dependence, plus 12 newly designed items scored in terms of dependence (7 items) or performance (5 items). The FIMSM covers the domains of self-care, sphincter management, mobility, locomotion, communication, and social cognition. The 12 new items explore the domains of community integration, emotional status, orientation, attention, reading/writing skills, swallowing, and speech intelligibility. By addressing a set of problems quite specific to TBI outpatients, the FAM was intended to raise the ceiling of the FIMSM and to allow a more precise estimate of their disability. These claims, however, were never supported in previous studies. We administered the FAM to 60 TBI outpatients, 2-88 months (median 16) after trauma. Rasch analysis (rating scale model) was adopted to test the psychometric properties of the scale. The FAM was reliable (Rasch item and person reliability 0.91 and 0.93, respectively). Two of the 12 FAM-specific items misfit the general construct severely and were deleted. Within the refined 28-item FAM scale, 4 new items and 2 FIMSM items still retained signs of misfit. The FAM was on average too easy. The most difficult item (a new one, Employability) did not reach the average ability of the subjects, and it was only slightly more difficult than the most difficult FIMSM item (Memory). The FAM does not appear to improve on the FIMSM as far as the assessment of TBI outpatients is concerned.
A research program for accountable and patient-centered health outcome measures.
W P Fisher. Journal of Outcome Measurement, 2(3): 222-39, 1998.

This article addresses the relevance of probabilistic conjoint (Rasch) measurement to five issues of accountability and patient-centeredness in health care. Goals for research, data quality standards, and standard metrics are proposed. The article is intended to begin to address concerns voiced by health care researchers, policy analysts, and the public about ways in which health care outcome measures can be improved.
The effect of item pool restriction on the precision of ability measurement for a Rasch-based CAT: comparisons to traditional fixed length examinations.
P N Halkitis. Journal of Outcome Measurement, 2(2): 97-122, 1998.

This paper describes a method for examining the precision of a computerized adaptive test (CAT) with a limited item pool. Standard errors of measurement obtained in testing simulees with a CAT drawing on a restricted pool were compared to the results of live paper-and-pencil achievement testing of 4494 nursing students on four versions of an examination on drug administration calculations. CAT precision was considered for both uniform and normal simulated examinee pools. Precision indices were also considered in terms of the number of CAT items required to reach the precision of the traditional tests. Results suggest that regardless of the size of the item pool, CAT provides greater measurement precision with fewer administered items, even when the choice of items is limited, but fails to achieve equiprecision along the entire ability continuum.
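The precision comparison rests on two standard Rasch-CAT quantities: item information at the current ability estimate, and the standard error of measurement derived from total test information. A hedged sketch under the dichotomous Rasch model (item selection by maximum information; the paper's exact CAT algorithm may differ):

```python
import math

def item_information(theta: float, delta: float) -> float:
    """Fisher information of a dichotomous Rasch item at ability theta: p(1-p)."""
    p = 1.0 / (1.0 + math.exp(-(theta - delta)))
    return p * (1.0 - p)

def pick_next_item(theta: float, remaining: list) -> float:
    """Choose the unused item difficulty that maximizes information at theta."""
    return max(remaining, key=lambda d: item_information(theta, d))

def sem(theta: float, administered: list) -> float:
    """Standard error of measurement = 1 / sqrt(total test information)."""
    info = sum(item_information(theta, d) for d in administered)
    return 1.0 / math.sqrt(info)
```

A restricted pool limits how close pick_next_item can get to the current estimate, so information accrues more slowly at ability levels the pool does not cover, which is exactly the loss of equiprecision the study reports.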
Analyzing nonadditive conjoint structures: compounding events by Rasch model probabilities.
G Karabatsos. Journal of Outcome Measurement, 2(3): 191-221, 1998.

The following study proposes a Rasch method to measure variables of nonadditive conjoint structures, where dichotomous response combinations are evaluated. In this framework, both the number of endorsed items and their latent positions are considered. This differs from the cumulative response process (measurable by the Rasch model), where the probability of a positive response to an item with measure δ_ι is a monotonic increasing function of the person's measure β_ν. It is also unlike the unfolding framework, where the probability of a positive response is maximal when β_ν = δ_ι and decreases monotonically as |β_ν - δ_ι| grows. The method involves four steps. In Step 1, items are scaled by the Rasch model for paired comparisons to produce a variable definition; these scale values serve as a basis for Steps 2 and 4. In Step 2, the nonadditive conjoint system is restructured to an additive one. In Step 3, the quantitative hypothesis of the restructured data is tested against the axioms of conjoint measurement theory. In Step 4, the data are analyzed by the Rasch rating scale model to evaluate individual response combinations, using the Step 1 item calibrations as anchors. The method was applied to simulated person responses to the Schedule of Recent Events (SRE; Holmes and Rahe, 1967). The results suggest that the method is useful and effective. It scales items with a robust method of paired comparisons, ensures additivity and quantification of the conjoint person-item matrix, produces a reasonable ordering of person measures from the perspective of individual response combinations, and provides satisfactory person and item separation (i.e., reliability). Furthermore, the restructured data reproduce the SRE item scale values obtained by paired comparisons in Step 1.
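The axiom testing in Step 3 can be illustrated with the simplest conjoint measurement axiom, independence (single cancellation): in a person-by-item matrix whose rows and columns are sorted by their marginal totals, cell values must be non-decreasing along every row and every column. A minimal sketch of that one check (the full axiom set also includes double cancellation, which this does not cover):

```python
def satisfies_single_cancellation(matrix):
    """Check independence (single cancellation) for a matrix assumed to be
    pre-sorted by row and column marginals: values must be non-decreasing
    along every row and every column."""
    rows_ok = all(all(row[j] <= row[j + 1] for j in range(len(row) - 1))
                  for row in matrix)
    cols = list(zip(*matrix))  # transpose to iterate columns
    cols_ok = all(all(col[i] <= col[i + 1] for i in range(len(col) - 1))
                  for col in cols)
    return rows_ok and cols_ok
```

A single reversal in any row or column is enough to reject the additive representation for that matrix.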
Factor structure and dimensionality of the multidimensional health locus of control scales in measuring adults with epilepsy.
S Gehlert, C H Chang. Journal of Outcome Measurement, 2(3): 173-90, 1998.

External locus of control has been implicated in the development of psychosocial problems in epilepsy, and adults with epilepsy exhibit scores that are more external than those of the normative sample of the Multidimensional Health Locus of Control (MHLC) scales. Although the MHLC scales have the potential to be quite useful in the assessment and treatment of adults with epilepsy, they have not been assessed psychometrically using data from persons with epilepsy. The present study examined the internal consistency, factor structure, and construct validity of the scales using data from a survey of 143 adults with epilepsy. Results from reliability analysis, confirmatory factor analysis, and Rasch analysis supported the hypothesized three-factor structure of the measure, which was internally reliable and factorially valid.