Journal of Psychoeducational Assessment | Pub Date: 2023-01-08 | DOI: 10.1177/07342829221149155
Cognitive Dyadic Measurements: A Game-Changer? Construction and First Validation of Three Cognitively Demanding Competitive Tasks
Dirk Lubbe, Pascale Stephanie Petri
Competition among individuals is a natural mode of determining who is fittest. While in nature, economics, and sports it is common to infer ability or aptitude from the outcome of competitions, our knowledge of its effects in psychological and educational assessment is scarce. In the present pilot study, we explore a measurement approach for assessing individual differences in interpersonal, face-to-face competitions, based on a set of cognitively demanding, competitive, fast-paced, two-opponent tasks. For initial task evaluation, we conducted comprehensive reliability and construct validation analyses, considering cognitive ability, motivation, and personality measures. Moreover, using structural equation models, we conducted a simultaneous factorization of the tasks with the other validation measures. The results suggest that the newly developed tasks measure both cognitive ability (intelligence) and a competition-specific component. The competition-specific component was positively associated with experience in competitive gaming and negatively correlated with neuroticism. While the pattern of validities was promising, the measurements’ reliabilities were not yet satisfactory. Implications for future research as well as the design of competition-based measurements are discussed.
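Reliability analyses like those reported above typically rest on internal-consistency estimates such as Cronbach's alpha. As a minimal sketch of the standard formula — not the authors' actual analysis, and using synthetic item data — alpha can be computed directly from an item-score matrix:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the sum scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Synthetic example: three items driven by one latent trait plus noise.
rng = np.random.default_rng(0)
trait = rng.normal(size=500)
items = np.column_stack(
    [trait + rng.normal(scale=0.8, size=500) for _ in range(3)]
)
print(round(cronbach_alpha(items), 2))  # high, since all items share the trait
```

With noise of this size the estimate lands around .8; weakening the shared trait or adding items with unrelated content pushes alpha down, which is the pattern an "unsatisfactory reliability" finding reflects.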
Pub Date: 2023-01-06 | DOI: 10.1177/07342829221149149
Homework Expectancy Value Cost Scale for Middle School Students: A Validation Study
Jianzhong Xu
We investigated the psychometric properties of the Homework Expectancy Value Cost Scale (HEVCS), using 1,072 Chinese students in Grades 7–8. Results from confirmatory factor analyses (CFA) indicated that the HEVCS included three factors: Homework Expectancy, Homework Value, and Homework Cost. Additionally, no latent mean differences were found across gender and grade level. Furthermore, the HEVCS had adequate to very good reliability estimates. Finally, congruent with theoretical predictions, Homework Expectancy and Homework Value were related positively to homework effort, completion, and mathematics achievement, and negatively to homework procrastination. Homework Cost was related negatively to homework effort, completion, and mathematics achievement, and positively to homework procrastination. Our investigation provides compelling evidence that the HEVCS is a valid scale for assessing homework motivational beliefs.
Pub Date: 2023-01-03 | DOI: 10.1177/07342829221149323
The Comprehensive Test of Phonological Processing, Second Edition: Measurement Invariance for Dual Language Learners
G. Shergill, Hailey Camozzi, Meagan D. O’Malley, Arlene Ortiz
The Comprehensive Test of Phonological Processing, 2nd Edition (CTOPP-2; Wagner et al., 2013) is commonly used in K–12 public schools to assess basic cognitive processing skills foundational for reading achievement. Psychometric support for its use with dual language learners (DLLs), a group representing over 10% of the school-aged population in the United States, is critical. This study tested the metric and scalar invariance of CTOPP-2 scores among school-aged children (n = 242; 41.3% Spanish-speaking DLL). Results indicate that the CTOPP-2’s three-factor (i.e., Phonological Awareness, Phonological Memory, and Rapid Automatic Naming) measurement structure displays metric and scalar invariance for DLLs. Model fit was improved when the Phonological Awareness and Phonological Memory factors were combined. Implications for future research and the practice of psychoeducational diagnostic assessment with DLLs are discussed.
Pub Date: 2022-12-13 | DOI: 10.1177/07342829221144717
Examination of Psychometric Evidence for Criterion-Referenced Scores from the SSIS SEL Brief Scales
Pui‐wa Lei, Hui Zhao, S. Hart, Xinyue Li, J. DiPerna
Efficient and intuitive interpretive frameworks for social-emotional learning (SEL) measures are necessary for identifying student needs and informing programming decisions across multitiered systems of support in schools. Though familiar to educators and often used with standardized tests of academic achievement, criterion-referenced frameworks are less common in SEL assessment. As such, the current study examined the psychometric evidence for scores from one such framework, the Competency-Referenced Performance Framework (CRPF), which was developed to inform universal screening decisions based on the SSIS SEL Brief Scales (Elliott et al., 2020). Specifically, we evaluated stability, test-criterion relationships with academic outcomes, and treatment sensitivity of the CRPF using data from an efficacy trial of a universal SEL program. Results provided preliminary supportive evidence for the CRPF.
Pub Date: 2022-12-13 | DOI: 10.1177/07342829221144868
Contribution to the Validation of the Expectancy-Value Scale for Primary School Students
F. Peixoto, J. Radišić, Ksenija Krstić, Kajsa Yang Hansen, A. Laine, A. Baucal, Maarja Sõrmus, Lourdes Mata
Grounded in ‘expectancy-value’ theory, this paper reports on the psychometric properties of an instrument intended to measure students’ motivation in mathematics. The participants were 2,045 third-, fourth-, and fifth-grade students from Estonia, Finland, Norway, Portugal, Serbia, and Sweden. The Expectancy-Value Scale (EVS) was found to be suitable for early grades of primary education in measuring competence self-perceptions and subjective task values in the domain of mathematics. The results indicate a good model fit aligned with the expectancy-value theory. The EVS dimensions showed good reliability, and scalar invariance was established. However, findings also indicated high correlations between some of the EVS dimensions, which is well documented for students at this age. The findings are discussed relative to the ‘expectancy-value’ theory framework and students’ age.
Pub Date: 2022-12-08 | DOI: 10.1177/07342829221143208
Advancing Educational Research on Children’s Self-Regulation With Observational Measures
Janina Eberhart, Andrew E. Koepp, S. Howard, R. Kok, D. McCoy, Sara T. Baker
Self-regulation is crucial for children’s development and learning. Almost by convention, it is assumed that self-regulation is a relatively stable skill, and little is known about its dynamic nature and context dependency. Traditional measurement approaches such as single direct assessments and adult reports are not well suited to address questions around variations of self-regulation within individuals and influences from social-contextual factors. Measures relying on child observations are uniquely positioned to address these questions and to advance the field by shedding light on self-regulatory variability and incremental growth. In this paper, we review traditional measurement approaches (direct assessments and adult reports) and recently developed observational measures. We discuss which questions observational measures are best suited to address and why traditional measurement approaches fall short. Finally, we share lessons learned based on our experiences using child observations in educational settings and discuss how measurement approaches should be carefully aligned to the research questions.
Pub Date: 2022-12-06 | DOI: 10.1177/07342829221143417
Psychometric Properties of a Preschool Language, Literacy, and Behavior Screener
Yagmur Seven, R. Dedrick, Keri M. Madsen, T. Spencer, E. Kelley, Howard Goldstein
This study investigated the psychometric properties of the Preschool Language, Literacy, and Behavior Screener (PLLB-S). We examined and tested the factor structure of the PLLB-S using exploratory and confirmatory factor analyses. We further conducted internal consistency, concurrent validity, and predictive validity analyses and evaluated teacher satisfaction with the PLLB-S. Our factor analyses resulted in 22 items distributed among three subscales with high internal consistency: oral language, emergent literacy, and behavior skills. The PLLB-S and its subscales correlated moderately to strongly with standardized measures. The emergent literacy subscale was the only one that significantly predicted children’s later vocabulary knowledge. Preschool teachers reported high satisfaction with the content and purpose of the questionnaire. We concluded that this tool, with sound psychometric properties, can potentially help increase the feasibility and efficiency of implementing standardized assessments in multi-tiered systems of support (MTSS) frameworks in preschool classrooms.
Pub Date: 2022-11-25 | DOI: 10.1177/07342829221140957
Development and Internal Validity of the Student Opinion Scale: A Measure of Test-Taking Motivation
D. Pastor, Chris R. Patterson, S. Finney
In low-stakes testing contexts, there are minimal personal consequences associated with examinee performance. Examples include assessments administered for research, program evaluation, test development, and international comparisons (e.g., Programme for International Student Assessment [PISA]). Because test-taking motivation can suffer in low-stakes conditions, the Student Opinion Scale (SOS) is commonly administered to measure test-taking effort and how personally important the examinee feels it is to do well on the test. Although popular, studies of the scale’s internal validity yield conflicting findings. The present study critically evaluates the creation of the SOS and considers its factor structure across six samples of college students differing in their college experience level and version of the SOS administered. Because findings only support the internal validity of the effort subscale, further study and development of the importance subscale is recommended.
Pub Date: 2022-11-24 | DOI: 10.1177/07342829221141512
Cross-Cultural Validation of the Student Engagement Instrument for Chilean Students
Jorge J. Varela, R. Melipillán, Amy L. Reschly, Ana Maria Squicciarini Navarro, Felipe Peña Quintanilla, Paola Sánchez Campos
Student engagement is associated with various aspects of students’ school experiences, including student achievement, high school completion, and post-secondary success. While measurement of student engagement has grown in countries around the world, few studies have been conducted in South America. This study examined a translated version of the Student Engagement Instrument (SEI), widely used in the U.S. and other countries, in a study of 2,337 adolescents in Chile. Consistent with prior research, confirmatory factor analyses revealed a six-factor solution as the best fit for the data. However, fewer items were retained than in studies of the SEI with students in the U.S. The Future Goals and Aspirations and Extrinsic Motivation subscales were associated, in expected directions, with achievement 1 year later.
Pub Date: 2022-11-23 | DOI: 10.1177/07342829221141934
The Effects of Changing Negatively Worded Items to Positively Worded Items on the Reliability and the Factor Structure of Psychological Scales
H. Dodeen
In survey measurement, acquiescence bias is a response effect that occurs when respondents agree with an item or question regardless of its content. Negatively worded items are commonly included in scales on the assumption that they discourage indiscriminate agreement. This mixed-wording approach, however, is not without substantial cost to both the factor structure and the psychometric properties of a scale. This study therefore set out to empirically evaluate the effects of changing negative items to equivalent positively worded items on the reliability and factor structure of psychological scales, under the hypothesis that this change improves both. Seven commonly used psychological scales containing both negatively and positively worded items were selected and administered to seven different samples, totaling 4,192 participants from a public university in the United Arab Emirates. The results confirmed that changing negative items to their equivalent positively worded items systematically and significantly increased reliability values and improved the factor structure of the scales.
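Before negatively worded items can be scored alongside positive ones, they are typically reverse-scored: on a 1–5 Likert scale, a response x becomes 6 − x. A minimal sketch of that routine step (synthetic responses, not data from this study):

```python
import numpy as np

def reverse_score(responses: np.ndarray,
                  scale_min: int = 1, scale_max: int = 5) -> np.ndarray:
    """Map responses to a negatively worded item onto the positive direction."""
    return (scale_max + scale_min) - responses

# One negatively worded item answered by five respondents on a 1-5 scale.
item = np.array([1, 2, 5, 4, 3])
print(reverse_score(item))  # -> [5 4 1 2 3]
```

Note that reverse-scoring only flips the direction of responses; the wording-related method variance that motivates this study remains, which is why rewording the items themselves, rather than merely recoding them, was the manipulation of interest.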