Initial Validation of Measures for Interest in Marketing Education
Pub Date: 2023-10-01 | DOI: 10.1177/07342829231178874
Stephan Daus, Siv-Elisabeth Skjelbred, Cathrine Pedersen
To improve the understanding of the drivers of interest, and its impact on other outcomes, researchers and educators need valid and informative measures capturing the different domains of interest. Addressing the lack of interest measures in marketing education, we develop and psychometrically assess three instruments reflecting the theoretical notions of situational and individual interest: course interest, contents interest, and job interest. Drawing on a relatively large sample of Norwegian upper-secondary marketing classes (N_classes = 22; N_students = 433), initial psychometric validation showed that each instrument has good unidimensionality, local item independence, measurement precision across the latent scales, and invariance across instructional approaches, gender, and parental education level. Furthermore, the interest instruments are related to, but distinct from, each other and provide different information than measures of perceptions and achievement. We conclude this first stepping stone by showing the instruments' information value and discussing future paths for strengthening the validity evidence.
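As a concrete illustration of one of the checks named above, the sketch below screens an item set for unidimensionality by comparing the first and second eigenvalues of the inter-item correlation matrix. It is a minimal Python sketch on simulated Likert responses; the array shapes and the threshold are assumptions, not details taken from the article.

```python
# Rough unidimensionality screen for an interest instrument: if the first
# eigenvalue of the inter-item correlation matrix dwarfs the second, a single
# dominant dimension is plausible. Illustrative only; `responses` is a
# hypothetical respondents-by-items array, not the authors' data.
import numpy as np

rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(433, 8)).astype(float)  # 433 students, 8 Likert items

corr = np.corrcoef(responses, rowvar=False)        # inter-item correlation matrix
eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]  # eigenvalues, largest first

ratio = eigvals[0] / eigvals[1]
print(f"first/second eigenvalue ratio: {ratio:.2f}")
# A ratio well above roughly 3-4 is often read as support for essential unidimensionality.
```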
{"title":"Initial Validation of Measures for Interest in Marketing Education","authors":"Stephan Daus, Siv-Elisabeth Skjelbred, Cathrine Pedersen","doi":"10.1177/07342829231178874","DOIUrl":"https://doi.org/10.1177/07342829231178874","url":null,"abstract":"To improve the understanding of the drivers of interest, and its impact on other outcomes, researchers and educators need valid and informative measures capturing the different domains of interest. Answering the lack of interest measures in marketing education, we develop and psychometrically assess three instruments reflecting the theoretical notions of situational and individual interest: course interest, contents interest, and job interest. Drawing on a relatively large sample of Norwegian upper-secondary marketing classes (Nclasses = 22; Nstudents = 433), initial psychometric validation showed that each instrument has good unidimensionality, local item independence, measurement precision across the latent scales, and invariance across instructional approaches, gender, and parental education level. Furthermore, the interest instruments are related but distinct from each other and provide different information than measures of perceptions and achievement. We conclude this first steppingstone by showing the instruments’ information value and discussing future paths for strengthening the validity evidence.","PeriodicalId":51446,"journal":{"name":"Journal of Psychoeducational Assessment","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42220057","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Development and Validation of the Sources of Research Self-Efficacy Scale
Pub Date: 2023-09-29 | DOI: 10.1177/07342829231204507
Eli A. Jones, Justine Piontek, Luke C. Walden, Leigh M. Harrell-Williams
Research self-efficacy is a key component of college students’ career development. This study’s purpose was to develop and begin to construct a validity argument for scores from the Sources of Research Self-Efficacy (SRSE) scale in college students. We explored validity evidence for SRSE scores from 719 undergraduate and graduate students based on test content, response processes, internal structure, relations to related variables, and consequences of testing. We present evidence from our development process for test content and response processes. Our statistical analyses suggest that a 20-item four-factor model is appropriate, with subscales representing Mastery Experiences, Vicarious Experiences, Social Persuasion, and Negative Emotional States. Subscale scores showed good internal consistency and correlated with both global research self-efficacy and research outcome expectancy scores. The SRSE shows promise as a measure of the various learning experiences that lead to students’ research self-efficacy in university settings.
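To make the reported reliability and convergent-correlation evidence concrete, here is a minimal sketch of how internal consistency (Cronbach's alpha) for one subscale and its correlation with a global self-efficacy score could be computed. The column names and simulated data are illustrative assumptions, not the SRSE items.

```python
# Cronbach's alpha for one hypothetical SRSE subscale, plus its correlation
# with a global research self-efficacy score. Column names are illustrative
# assumptions, not the published item labels.
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Classical alpha: k/(k-1) * (1 - sum of item variances / variance of total)."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(1)
df = pd.DataFrame(rng.integers(1, 8, size=(719, 5)),
                  columns=[f"mastery_{i}" for i in range(1, 6)])  # 5 Mastery items, 7-point scale
df["global_rse"] = df.mean(axis=1) + rng.normal(0, 0.5, size=719)  # fake external criterion

mastery_cols = [c for c in df.columns if c.startswith("mastery_")]
alpha = cronbach_alpha(df[mastery_cols])
r = df[mastery_cols].sum(axis=1).corr(df["global_rse"])
print(f"alpha = {alpha:.2f}, r(subscale, global) = {r:.2f}")
```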
{"title":"Development and Validation of the Sources of Research Self-Efficacy Scale","authors":"Eli A. Jones, Justine Piontek, Luke C. Walden, Leigh M. Harrell-Williams","doi":"10.1177/07342829231204507","DOIUrl":"https://doi.org/10.1177/07342829231204507","url":null,"abstract":"Research self-efficacy is a key component of college students’ career development. This study’s purpose was to develop and begin to construct a validity argument for scores from the Sources of Research Self-Efficacy (SRSE) scale in college students. We explored validity evidence for SRSE scores from 719 undergraduate and graduate students based on test content, response processes, internal structure, relations to related variables, and consequences of testing. We present evidence from our development process for test content and response processes. Our statistical analyses suggest that a 20-item four-factor model is appropriate, with subscales representing Mastery Experiences, Vicarious Experiences, Social Persuasion, and Negative Emotional States. Subscale scores showed good internal consistency and correlated with both global research self-efficacy and research outcome expectancy scores. The SRSE shows promise as a measure of the various learning experiences that lead to students’ research self-efficacy in university settings.","PeriodicalId":51446,"journal":{"name":"Journal of Psychoeducational Assessment","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135193204","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Tripartite Occupational Well-Being Scale: Evidence of Validity Among Teachers
Pub Date: 2023-09-14 | DOI: 10.1177/07342829231202313
Rebecca J. Collie
This study involved examining the psychometric properties of the Tripartite Occupational Well-Being Scale (TOWBS) among a sample of 502 Australian teachers. The TOWBS (12 items) comprises three factors of subjective vitality, behavioral engagement, and professional growth. The TOWBS – Short (3 items) assesses a broad factor of occupational well-being. Results confirmed the reliability, factor structure, and longitudinal measurement invariance of the scale scores for both scales. In addition, the two forms of the scale functioned similarly across different teacher characteristics, and the well-being factors were demonstrated to be associated with four external correlates in plausible ways (workplace buoyancy, psychological detachment, somatic burden, emotional exhaustion). Combined, findings offer support for the scale as an assessment of teacher well-being. Implications for research and practice are discussed.
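One way to see what "the two forms of the scale functioned similarly" implies in practice is a simple part-whole check between the 3-item short form and the 12-item total, sketched below on simulated data; the item labels and the choice of short-form items are assumptions.

```python
# How closely a 3-item short form tracks the 12-item TOWBS total: a simple
# part-whole check. Item labels and the selected short-form items are
# assumptions for illustration only.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
items = pd.DataFrame(rng.integers(1, 8, size=(502, 12)),
                     columns=[f"towbs_{i}" for i in range(1, 13)])

full_score = items.mean(axis=1)
short_score = items[["towbs_1", "towbs_5", "towbs_9"]].mean(axis=1)  # hypothetical short form

print(f"r(short, full) = {short_score.corr(full_score):.2f}")
```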
{"title":"The Tripartite Occupational Well-Being Scale: Evidence of Validity Among Teachers","authors":"Rebecca J. Collie","doi":"10.1177/07342829231202313","DOIUrl":"https://doi.org/10.1177/07342829231202313","url":null,"abstract":"This study involved examining the psychometric properties of the Tripartite Occupational Well-Being Scale (TOWBS) among a sample of 502 Australian teachers. The TOWBS (12 items) comprises three factors of subjective vitality, behavioral engagement, and professional growth. The TOWBS – Short (3 items) assesses a broad factor of occupational well-being. Results confirmed the reliability, factor structure, and longitudinal measurement invariance of the scale scores for both scales. In addition, the two forms of the scale functioned similarly across different teacher characteristics, and the well-being factors were demonstrated to be associated with four external correlates in plausible ways (workplace buoyancy, psychological detachment, somatic burden, emotional exhaustion). Combined, findings offer support for the scale as an assessment of teacher well-being. Implications for research and practice are discussed.","PeriodicalId":51446,"journal":{"name":"Journal of Psychoeducational Assessment","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134912544","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cumulative Ordering as Evidence of Construct Validity for Assessments of Developmental Attributes
Pub Date: 2023-09-07 | DOI: 10.1177/07342829231199007
Stephen M. Humphry, Paul Montuoro, Carolyn Maxwell
This article builds upon a prominent definition of construct validity that focuses on variation in attributes causing variation in measurement outcomes. It synthesizes this definition and uses Rasch measurement modeling to explicate a modified conceptualization of construct validity for assessments of developmental attributes. If attributes are conceived as developmental, hypotheses can be developed about how new knowledge builds cumulatively upon the cognitive capacity afforded by prior knowledge. This cumulative ordering of the knowledge required to accomplish test items constitutes evidence of a specific form of construct validity. Examples of cumulative ordering appear in the extant literature, but they are rare and confined to the early literature. Furthermore, cumulative ordering has never been explicated, especially its relationship to construct validity. This article describes three of the most complete examples of cumulative ordering in the literature. These examples are used to synthesize a method for assessing cumulative ordering, in which the Rasch model is used to assess the progression of item difficulties, which are, in turn, used to review developmental theories and hypotheses, and the tests themselves. We discuss how this conceptualization of construct validity can lead to a more direct relationship between developmental theories and tests, which, for practitioners, should result in a clearer understanding of what test results actually mean. Finally, we discuss how cumulative ordering can be used to facilitate decisions about consequential validity.
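The argument turns on the dichotomous Rasch model, where the probability of success depends only on the gap between person ability and item difficulty, so a developmental theory translates into a hypothesized order of item difficulties. The sketch below, with invented difficulty estimates, shows one simple way to quantify agreement between the theorized and estimated orderings; it illustrates the logic only and is not the authors' procedure.

```python
# Cumulative ordering in Rasch terms: the model gives P(correct) as a logistic
# function of person ability minus item difficulty, so a developmental hierarchy
# predicts a specific rank order of estimated difficulties. The difficulties
# below are invented for illustration; a rank correlation against the
# hypothesized order is one simple way to quantify agreement.
import numpy as np
from scipy.stats import spearmanr

def rasch_prob(theta: float, delta: np.ndarray) -> np.ndarray:
    """Dichotomous Rasch model: P(X=1 | theta, delta) = exp(theta - delta) / (1 + exp(theta - delta))."""
    return 1.0 / (1.0 + np.exp(-(theta - delta)))

hypothesized_order = np.array([0, 1, 2, 3, 4])                 # items ordered by the developmental theory
estimated_difficulty = np.array([-1.8, -0.6, -0.4, 0.9, 1.7])  # fake Rasch difficulty estimates

rho, p = spearmanr(hypothesized_order, estimated_difficulty)
print(f"Spearman rho between theorized and estimated order: {rho:.2f} (p = {p:.3f})")
print("P(correct) for an average person:", np.round(rasch_prob(0.0, estimated_difficulty), 2))
```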
{"title":"Cumulative Ordering as Evidence of Construct Validity for Assessments of Developmental Attributes","authors":"Stephen M. Humphry, Paul Montuoro, Carolyn Maxwell","doi":"10.1177/07342829231199007","DOIUrl":"https://doi.org/10.1177/07342829231199007","url":null,"abstract":"This article builds upon a proiminent definition of construct validity that focuses on variation in attributes causing variation in measurement outcomes. This article synthesizes the defintion and uses Rasch measurement modeling to explicate a modified conceptualization of construct validity for assessments of developmental attributes. If attributes are conceived as developmental, hypotheses about how new knowledge builds cumulatively upon the cognitive capacity afforded by prior knowledge can be developed. This cumulative ordering of knowledge required to accomplish test items constitutes evidence of a specific form of construct validity. Examples of cumulative ordering appear in the extant literature, but they are rare and confined to the early literature. Furthermore, cumulative ordering has never been explicated, especially its relationship to construct validity. This article describes three of the most complete examples of cumulative ordering in the literature. These examples are used to synthesize a method for assessing cumulative ordering, in which the Rasch model is used to assess the progression of item difficulties which are, in turn, used to review developmental theories and hypotheses, and the tests themselves. We discuss how this conceptualization of construct validity can lead to a more direct relationship between developmental theories and tests which, for practitioners, should result in a clearer understanding of what tests results actually mean. Finally, we discuss how cumulative ordering can be used to facilitate decisions about consequential validity.","PeriodicalId":51446,"journal":{"name":"Journal of Psychoeducational Assessment","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2023-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42192450","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Further Validation of the Social Efficacy and Social Outcome Expectations Scale
Pub Date: 2023-09-01 | DOI: 10.1177/07342829231198277
Stephen L. Wright, Michael A. Jenkins-Guarnieri
The current study sought to advance the Social Self-Efficacy and Social Outcome Expectations scale using multiple approaches to scale development. Data from 583 undergraduate students were used in two scale development approaches: Classical Test Theory (CTT) and Item Response Theory (IRT). Confirmatory factor analysis suggested a two-factor structure that aligns with the theoretically based domains for SEOES items and supports previously proposed models of this scale from CTT and psychometric analyses. The IRT analysis indicated that the SEOES items have greater measurement precision at lower levels of the latent constructs. Future research directions are provided, and practice implications are discussed.
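The IRT claim about precision has a simple quantitative reading: in a 2PL model, an item's information function peaks at its difficulty, so easy items concentrate information low on the latent scale. The Python sketch below evaluates that function for one illustrative item; the parameter values are assumptions, not SEOES estimates.

```python
# Why an IRT analysis can show "more precision at lower trait levels": in the
# 2PL model the item information is a^2 * P(theta) * (1 - P(theta)), which
# peaks at theta = b. Items with low difficulty therefore concentrate
# information low on the latent scale. Parameter values are illustrative.
import numpy as np

def item_information_2pl(theta: np.ndarray, a: float, b: float) -> np.ndarray:
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a**2 * p * (1 - p)

theta = np.linspace(-3, 3, 7)
a, b = 1.4, -1.0                       # a discriminating item located low on the scale
info = item_information_2pl(theta, a, b)
for t, i in zip(theta, info):
    print(f"theta = {t:+.1f}  information = {i:.3f}")  # peaks near theta = b = -1.0
```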
{"title":"Further Validation of the Social Efficacy and Social Outcome Expectations Scale","authors":"Stephen L. Wright, Michael A. Jenkins-Guarnieri","doi":"10.1177/07342829231198277","DOIUrl":"https://doi.org/10.1177/07342829231198277","url":null,"abstract":"The current study sought out to advance the Social Self-Efficacy and Social Outcome Expectations scale using multiple approaches to scale development. Data from 583 undergraduate students were used in two scale development approaches: Classic Test Theory (CTT) and Item Response Theory (IRT). Confirmatory factor analysis suggested a 2-factor structure that aligns with the theoretically based domains for SEOES items and supports previously proposed models of this scale from CTT and psychometric analyses. The IRT analysis indicated that the SEOES items have greater measurement precision at measuring lower levels of the latent constructs. Future research directions are provided and practice implications are discussed.","PeriodicalId":51446,"journal":{"name":"Journal of Psychoeducational Assessment","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45401363","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Do Dimensions of Perfectionism Predict Dimensions of Test Anxiety While Controlling for Depression?
Pub Date: 2023-09-01 | DOI: 10.1177/07342829221095653
P. Lowe
The present study examined whether cultural differences exist in different dimensions of perfectionism and whether different dimensions of perfectionism (i.e., rigid and self-critical perfectionism) predicted different dimensions of test anxiety while controlling for depression in a sample of Canadian and Singaporean higher education students. In addition, culture was examined to determine whether it served as a moderator variable in the relationship between different dimensions of perfectionism and different dimensions of test anxiety. The present study was grounded in DiBartolo and Rendón’s cross-cultural framework for conducting intra- and intercultural research in the area of perfectionism. The sample for the study included 1,095 undergraduate students. Perfectionism, test anxiety, and depression measures were administered to the students online. The results of mean and covariance analyses showed that the perfectionism measure was invariant across Canadian and Singaporean students. In addition, the results of latent mean analyses found no significant differences on the different dimensions of perfectionism between Canadian and Singaporean students. The results of analyses of variance also found no significant differences among ethnic groups on the different dimensions of perfectionism in Canada and Singapore. Furthermore, the results of five hierarchical regression analyses found that self-critical perfectionism explained unique variance in the five test anxiety dimensions while controlling for depression, and culture did not serve as a moderator variable in the relationship between the different dimensions of perfectionism and test anxiety. Implications of the findings are discussed.
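A compressed version of the analytic strategy described here (hierarchical regression with depression entered first, then a product term to probe moderation by culture) can be sketched as follows with simulated data; the variable names, coding, and effect sizes are placeholders rather than the study's values.

```python
# A compact version of the reported analysis strategy: regress a test-anxiety
# dimension on depression first, then add self-critical perfectionism and
# inspect the change in R^2; a perfectionism-by-culture product term probes
# moderation. Data and variable names are simulated stand-ins.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 1095
df = pd.DataFrame({
    "depression": rng.normal(size=n),
    "self_critical_perf": rng.normal(size=n),
    "culture": rng.integers(0, 2, size=n),  # 0 = Canada, 1 = Singapore (coding is an assumption)
})
df["test_anxiety"] = 0.4 * df["depression"] + 0.3 * df["self_critical_perf"] + rng.normal(size=n)

step1 = sm.OLS(df["test_anxiety"], sm.add_constant(df[["depression"]])).fit()
step2 = sm.OLS(df["test_anxiety"],
               sm.add_constant(df[["depression", "self_critical_perf"]])).fit()
print(f"Delta R^2 for perfectionism beyond depression: {step2.rsquared - step1.rsquared:.3f}")

df["perf_x_culture"] = df["self_critical_perf"] * df["culture"]
step3 = sm.OLS(df["test_anxiety"],
               sm.add_constant(df[["depression", "self_critical_perf", "culture", "perf_x_culture"]])).fit()
print(f"moderation term p-value: {step3.pvalues['perf_x_culture']:.3f}")
```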
{"title":"Do Dimensions of Perfectionism Predict Dimensions of Test Anxiety While Controlling for Depression?","authors":"P. Lowe","doi":"10.1177/07342829221095653","DOIUrl":"https://doi.org/10.1177/07342829221095653","url":null,"abstract":"The present study examined whether cultural differences in different dimensions of perfectionism exist and whether different dimensions of perfectionism (i.e., rigid and self-critical perfectionism) predicted different dimensions of test anxiety while controlling for depression in a sample of Canadian and Singapore higher education students. In addition, culture was examined to determine whether it served as a moderator variable in the relationship between different dimensions of perfectionism and different dimensions of test anxiety. The present study was grounded in DiBartolo and Rendón’s cross-cultural framework for conducting intra- and intercultural research in the area of perfectionism. The sample for the study included 1,095 undergraduate students. Perfectionism, test anxiety, and depression measures were administered to the students online. The results of mean and covariance analyses found the perfectionism measure was invariant across Canadian and Singapore students. In addition, the results of latent mean analyses found no significant differences on the different dimensions of perfectionism between Canadian and Singapore students. The results of analyses of variance also found no significant differences in different ethnic groups on the different dimensions of perfectionism in Canada and Singapore. Furthermore, the results of five hierarchical regression analyses found self-critical perfectionism explained unique variance in the five different test anxiety dimensions while controlling for depression, and culture did not serve as a moderator variable in the relationship between the different dimensions of perfectionism and test anxiety. Implications of the findings are discussed.","PeriodicalId":51446,"journal":{"name":"Journal of Psychoeducational Assessment","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49052809","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Assessment of Engagement and Disaffection With the Student Engagement Instrument
Pub Date: 2023-08-29 | DOI: 10.1177/07342829231199314
Kayleigh O’Donnell, Amy L. Reschly, James J. Appleton
Research suggests the need to assess both positive and negative forms of student engagement. The purpose of this study was to pilot disaffection items for use with the Student Engagement Instrument (SEI) in a sample of middle school students from a rural area in the Southeastern U.S. This study explored the factor structure of the piloted items alongside the SEI, measurement invariance, and associations of student engagement and disaffection with educational outcomes such as mathematics and reading test scores, discipline referrals, and absences. Results hold implications for our theoretical understanding of engagement, suggesting that the engagement and disaffection dimensions are theoretically and psychometrically distinct.
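The external-correlate evidence amounts to checking the sign pattern of correlations between the engagement and disaffection composites and outcomes such as achievement and absences. A minimal sketch on simulated data follows; all variables are placeholders.

```python
# Convergent/discriminant pattern check: an engagement composite should relate
# positively to achievement and negatively to absences, with the disaffection
# composite showing the reverse signs. All columns are simulated placeholders.
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
n = 400
df = pd.DataFrame({
    "engagement": rng.normal(size=n),
    "disaffection": rng.normal(size=n),
})
df["reading_score"] = 0.4 * df["engagement"] - 0.2 * df["disaffection"] + rng.normal(size=n)
df["absences"] = np.maximum(0, 3 - df["engagement"] + df["disaffection"] + rng.normal(size=n)).round()

print(df[["engagement", "disaffection"]].corrwith(df["reading_score"]).round(2))
print(df[["engagement", "disaffection"]].corrwith(df["absences"]).round(2))
```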
{"title":"Assessment of Engagement and Disaffection With the Student Engagement Instrument","authors":"Kayleigh O’Donnell, Amy L. Reschly, James J. Appleton","doi":"10.1177/07342829231199314","DOIUrl":"https://doi.org/10.1177/07342829231199314","url":null,"abstract":"Research suggests the need to assess both positive and negative forms of student engagement. The purpose of this study was to pilot disaffection items with the Student Engagement Instrument (SEI) with a sample of middle school students from a rural area in the Southeastern U.S. This study explored the factor structure of the piloted items alongside the SEI, measurement invariance, and associations between student engagement and disaffection with educational outcomes such as mathematics and reading test scores, discipline referrals, and absences. Results hold implications for our theoretical understanding of engagement, suggesting that engagement and disaffection dimensions are theoretically and psychometrically distinct.","PeriodicalId":51446,"journal":{"name":"Journal of Psychoeducational Assessment","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2023-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42875913","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Psychometric Properties of the Turkish Version of the Teacher Academic Optimism Scale
Pub Date: 2023-08-23 | DOI: 10.1177/07342829231197460
Erdem Karataş, Murat Özdemir
This study aimed to adapt the Teacher Academic Optimism Scale-Secondary (TAOS-S) to Turkish culture. A total of 453 public school teachers in Turkey participated in the study. We examined the validity, reliability, and measurement invariance of the scale across school levels. The results indicated good internal consistency for the TAOS-S, suggesting it is a sound measure of teacher academic optimism. Confirmatory factor analysis revealed that the three-dimensional individual teacher academic optimism construct showed a good fit, with strong reliability evidence. Multigroup confirmatory factor analysis results indicated that both configural and metric invariance were observed across school levels; however, scalar invariance was only partially confirmed. Overall, our results show the TAOS-S has sound psychometric properties, is culturally and linguistically acceptable, and is equally effective in assessing the academic optimism of Turkish teachers.
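The configural/metric/scalar sequence reported here is typically adjudicated with nested-model comparisons. The sketch below shows the arithmetic of a chi-square difference test and the common Delta-CFI heuristic using invented fit values, not the study's results.

```python
# The usual decision rule behind "configural and metric invariance held, scalar
# only partially": compare nested multigroup CFA models with a chi-square
# difference test (and, commonly, a Delta-CFI <= .01 heuristic). The fit values
# below are invented to show the arithmetic, not the article's results.
from scipy.stats import chi2

def chisq_diff_test(chisq_restricted, df_restricted, chisq_free, df_free):
    """Likelihood-ratio test between nested models (restricted minus free)."""
    d_chisq = chisq_restricted - chisq_free
    d_df = df_restricted - df_free
    return d_chisq, d_df, chi2.sf(d_chisq, d_df)

# hypothetical fit: configural (free) vs. metric (loadings constrained equal across school levels)
d_chisq, d_df, p = chisq_diff_test(chisq_restricted=312.4, df_restricted=130,
                                   chisq_free=301.9, df_free=122)
print(f"Delta chi2({d_df}) = {d_chisq:.1f}, p = {p:.3f}")

cfi_configural, cfi_metric = 0.951, 0.948
print("metric invariance tenable by Delta-CFI rule:", cfi_configural - cfi_metric <= 0.01)
```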
{"title":"Psychometric Properties of the Turkish Version of the Teacher Academic Optimism Scale","authors":"Erdem Karataş, Murat Özdemir","doi":"10.1177/07342829231197460","DOIUrl":"https://doi.org/10.1177/07342829231197460","url":null,"abstract":"This study aimed to adapt Teacher Academic Optimism Scale-Secondary (TAOS-S) to Turkish culture. A total of 453 public school teachers in Turkey participated in the study. We examined the validity, reliability, and measurement invariance of the scale across school levels. The results indicated good internal consistency of the TAOS, suggesting a good measure to assess teacher academic optimism. Confirmatory factor analysis revealed that the three-dimensional individual teacher academic optimism construct showed a good fit with strong reliability evidence. Multigroup confirmatory factor analysis results indicated both configural and metric invariance was observed across school levels; however, scalar invariance was only partially confirmed. Overall, our results show the TAOS has sound psychometric properties, is culturally and linguistically acceptable, and is equally effective in assessing the academic optimism of Turkish teachers.","PeriodicalId":51446,"journal":{"name":"Journal of Psychoeducational Assessment","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135520511","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Evaluating Phonemic Awareness and Orthographic Mapping With the Woodcock–Johnson IV
Pub Date: 2023-08-16 | DOI: 10.1177/07342829231196199
Jason R. Parkin, Daniel B. Hajovsky, V. Alfonso
Although phonemic awareness is an essential skill in learning to decode written words, practitioners may question which phonemic awareness tasks best operationalize their relationship with orthographic mapping, the process that converts a decoded word into one instantly recognized on sight. Tests from the Woodcock–Johnson IV were used to evaluate the effects of phonemic awareness tasks and vocabulary on measures of pseudoword decoding, word reading, and spelling in three age groups (ages 6 to 8, 9 to 13, and 14 to 19 years) within the WJ IV normative sample (N = 4,082). Results from path analysis indicated that the effects of phonemic awareness tasks and vocabulary varied by age and reading task type, with mixed support for theoretical expectations. Results from the 9 to 13 age group appeared closest to conforming to hypotheses. We discuss implications of measuring phonemic awareness, reading, and spelling in the context of comprehensive psychoeducational assessment.
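Path analysis of this kind can be approximated, for intuition, by standardized regressions estimated separately within each age group, so the coefficients read as path weights. The sketch below uses simulated data and illustrative variable names, not WJ IV scores.

```python
# Path-analytic logic in miniature: standardize the variables and estimate each
# outcome's regression so coefficients read as path weights; repeating the fit
# per age group mirrors the multi-group comparison. Variable names and data are
# illustrative, not WJ IV scores.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 600
df = pd.DataFrame({
    "phonemic_awareness": rng.normal(size=n),
    "vocabulary": rng.normal(size=n),
    "age_group": rng.choice(["6-8", "9-13", "14-19"], size=n),
})
df["word_reading"] = 0.5 * df["phonemic_awareness"] + 0.3 * df["vocabulary"] + rng.normal(size=n)

for group, sub in df.groupby("age_group"):
    z = sub[["phonemic_awareness", "vocabulary", "word_reading"]].apply(
        lambda s: (s - s.mean()) / s.std(ddof=0))
    fit = sm.OLS(z["word_reading"], sm.add_constant(z[["phonemic_awareness", "vocabulary"]])).fit()
    print(group, fit.params[["phonemic_awareness", "vocabulary"]].round(2).to_dict())
```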
{"title":"Evaluating Phonemic Awareness and Orthographic Mapping With the Woodcock–Johnson IV","authors":"Jason R. Parkin, Daniel B. Hajovsky, V. Alfonso","doi":"10.1177/07342829231196199","DOIUrl":"https://doi.org/10.1177/07342829231196199","url":null,"abstract":"Although phonemic awareness is an essential skill in learning to decode written words, practitioners may question which phonemic awareness tasks best operationalize their relationship with orthographic mapping, the process that converts a decoded word into one instantly recognized on sight. Tests from the Woodcock– Johnson IV were used to evaluate the effects of phonemic awareness tasks and vocabulary on measures of pseudoword decoding, word reading, and spelling in three age groups (ages 6 to 8, 9 to 13, and 14 to 19 years) within the WJ IV normative sample ( N = 4082). Results from path analysis indicated the effects of phonemic awareness tasks and vocabulary varied depending on age and reading task type with mixed results based on theoretical expectations. Results from the 9 to 13 age group appeared closest to conforming to hypotheses. We discuss implications of measuring phonemic awareness, reading, and spelling in the context of comprehensive psychoeducational assessment.","PeriodicalId":51446,"journal":{"name":"Journal of Psychoeducational Assessment","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2023-08-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41814013","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Examining the Psychometric Properties of the Motivational Scale of Motivated Strategies for Learning Questionnaire for English Learning Among Chinese Secondary Students
Pub Date: 2023-08-02 | DOI: 10.1177/07342829231193064
Zhengdong Gan, Zheng Yuan, Randall Schumacker
This study examined the psychometric properties of the Motivational scale of the Motivated Strategies for Learning Questionnaire in a sample of 656 Chinese secondary students in an English learning context. Exploratory factor analysis and confirmatory factor analysis results suggested that a five-factor motivational structure fit the data better than the original six-factor model reported by Pintrich and colleagues. Reliability coefficients for these five motivational subscales (i.e., intrinsic value, extrinsic goal orientation, control of learning beliefs, self-efficacy for learning and performance, and test anxiety) were in the adequate to good range. All motivational subscales except test anxiety were positively correlated with metacognitive regulation and/or students' self-rated English proficiency. The second-order CFA further provided empirical evidence for a common, broad motivational factor underlying the five subscales.
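The five- versus six-factor comparison can be illustrated with a generic exploratory factor-analysis routine and a held-out log-likelihood comparison, as sketched below on simulated responses; this is a stand-in for the authors' EFA/CFA pipeline, and the item count and factor structure are assumptions.

```python
# The five- vs. six-factor question as a quick exploratory check: fit both
# solutions and compare average held-out log-likelihood. This uses a generic
# factor-analysis routine on simulated item responses as a stand-in for the
# authors' EFA/CFA pipeline, so it illustrates the comparison only.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
n_students, n_items = 656, 31            # item count for the motivational section is an assumption
latent = rng.normal(size=(n_students, 5))
loadings = rng.normal(scale=0.7, size=(5, n_items))
X = latent @ loadings + rng.normal(scale=1.0, size=(n_students, n_items))

for k in (5, 6):
    fa = FactorAnalysis(n_components=k, rotation="varimax", random_state=0)
    ll = cross_val_score(fa, X, cv=5).mean()   # mean per-sample log-likelihood on held-out folds
    print(f"{k}-factor solution: held-out log-likelihood = {ll:.2f}")
```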
{"title":"Examining the Psychometric Properties of the Motivational Scale of Motivated Strategies for Learning Questionnaire for English Learning Among Chinese Secondary Students","authors":"Zhengdong Gan, Zheng Yuan, Randall Schumacker","doi":"10.1177/07342829231193064","DOIUrl":"https://doi.org/10.1177/07342829231193064","url":null,"abstract":"This study examined the psychometric properties of the Motivational scale of the Motivated Strategies for Learning Questionnaire in a sample of 656 Chinese secondary students in an English learning context. Exploratory factor analysis and confirmatory factor analysis results suggested that a five-factor motivational structure fit the data better as opposed to the original six-factor motivational model reported by Pintrich and his colleagues. Reliability coefficients of these five motivational subscales (i.e., intrinsic value, extrinsic goal orientation, control of learning beliefs, self-efficacy for learning and performance , and test anxiety) were in the adequate to good range. All motivational subscales except test anxiety were positively correlated with metacognitive regulation and/or students’ self-rated English proficiency. The second-order CFA further provided empirical evidence to consider a common and broad motivational factor that can be inferred from the five subscales.","PeriodicalId":51446,"journal":{"name":"Journal of Psychoeducational Assessment","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-08-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135015773","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}