Identifying Response Styles Using Person Fit Analysis and Response-Styles Models
Pub Date: 2023-07-03 | DOI: 10.1080/15366367.2022.2104565
Stefanie A. Wind, Yuan Ge
In selected-response assessments such as attitude surveys with Likert-type rating scales, examinees often select from rating scale categories to reflect their locations on a construct. Researchers have observed that some examinees exhibit response styles, which are systematic patterns of responses in which examinees are more likely to select certain response categories, regardless of their locations on the construct (Baumgartner & Steenkamp, 2001; Paulhus, 1991; Roberts, 2016; Van Vaerenbergh & Thomas, 2013). For example, a midpoint response style occurs when examinees select middle rating scale categories most often, and an extreme response style occurs when examinees tend to select extreme categories most often. Response styles complicate the interpretation of examinee and item location estimates because responses may not fully reflect examinee locations on the construct. Accordingly, response styles can present a source of construct-irrelevant variance that threatens the validity of the interpretation and use of scores (American Educational Research Association [AERA], American Psychological Association [APA], & National Council on Measurement in Education [NCME], 2014). To identify and minimize construct-irrelevant impacts of response styles, researchers have proposed tools such as the Partial Credit Model – Response Style (PCMRS; Tutz et al., 2018), an extension of the Partial Credit Model (PCM; Masters, 1982) that models the tendency for examinees to exhibit response styles. The PCMRS directly models response styles with a person-specific gamma parameter and corrects estimates of item difficulty for the presence of response styles. Specifically, the response style is treated as a random effect, where small distances between thresholds indicate a tendency to exhibit an extreme response style and widened distances between thresholds indicate a tendency to exhibit a midpoint response style. Thus far, most research on the PCMRS has focused on the presentation of the model and statistical software tools for estimating it (Schauberger, 2020; Tutz et al., 2018; Tutz & Schauberger, 2020). However, we identified one application of this approach in which Dibek (2020) applied the PCM and the PCMRS to data from the 2015 administration of the TIMSS assessment and detected the presence of response styles among student participants. Given the lack of prior research focusing on the interpretation and use of the PCMRS in applied survey research contexts, additional explorations are warranted. We describe the PCMRS model parameters and their interpretation in more detail later in the manuscript. Researchers have also used person fit analysis (Glas & Khalid, 2016) based on models from measurement frameworks with clear guidelines for identifying meaningful response patterns. For example, researchers have used the PCM, which falls within the Rasch measurement theory framework (Rasch, 1960), to identify examinees whose patterns of responses are different from model expectations.
{"title":"Identifying Response Styles Using Person Fit Analysis and Response-Styles Models","authors":"Stefanie A. Wind, Yuan Ge","doi":"10.1080/15366367.2022.2104565","DOIUrl":"https://doi.org/10.1080/15366367.2022.2104565","url":null,"abstract":"In selected-response assessments such as attitude surveys with Likert-type rating scales, examinees often select from rating scale categories to reflect their locations on a construct. Researchers have observed that some examinees exhibit response styles, which are systematic patterns of responses in which examinees are more likely to select certain response categories, regardless of their locations on the construct (Baumgartner & Steenkamp, 2001; Paulhus, 1991; Roberts, 2016; Van Vaerenbergh & Thomas, 2013). For example, a midpoint response style occurs when examinees select middle rating scale categories most often, and an extreme response style occurs when examinees tend to select extreme categories most often. Response styles complicate the interpretation of examinee and item location estimates because responses may not fully reflect examinee locations on the construct. Accordingly, response styles can present a source of construct-irrelevant variance that threatens the validity of the interpretation and use of scores (American Educational Research Association [AERA], American Psychological Association [APA], & National Council on Measurement in Education [NCME], 2014). To identify and minimize construct-irrelevant impacts of response styles, researchers have proposed tools such as the Partial Credit Model – Response Style (PCMRS; Tutz et al., 2018) as an extension of the Partial credit model (PCM; Masters, 1982) to model the tendency for examinees to exhibit response styles. The PCMRS directly models response styles as a person-specific gamma parameter and corrects estimates of item difficulty for the presence of response styles. Specifically, the response style is treated as a random effect, where small distances between thresholds indicate a tendency to exhibit an extreme response style and widened distances between thresholds indicate a tendency to exhibit a midpoint response style. Thus far, most research on the PCMRS has focused on the presentation of the model and statistical software tools for estimating it (Schauberger, 2020, 2020; Tutz et al., 2018; Tutz & Schauberger, 2020). However, we identified one application of this approach in which Dibek (2020) employed the PCM and the PCMRS to data from the 2015 administration of the TIMSS assessment and detected the presence of response styles among student participants. Given the lack of prior research focusing on the interpretation and use of the PCMRS in applied survey research contexts, additional explorations are warranted. We describe details about the PCMRS model parameters and interpretation more detail later in the manuscript. Researchers have also used person fit analysis (Glas & Khalid, 2016) from models based on measurement frameworks with clear guidelines for identifying meaningful response patterns. 
For example, researchers have used the PCM, which falls within the Rasch measurement theory framework (Rasch, 1960) to identify examinees whose patterns of responses are different","PeriodicalId":46596,"journal":{"name":"Measurement-Interdisciplinary Research and Perspectives","volume":null,"pages":null},"PeriodicalIF":1.0,"publicationDate":"2023-07-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79645427","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
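To make the threshold-distance interpretation concrete, the base-R sketch below shows how compressing or widening a partial credit item's thresholds changes the category probabilities for a person located at the item's center. The scaling-by-exp(gamma) parameterization is a simplification chosen for illustration rather than the exact PCMRS specification of Tutz et al. (2018); the PCMRS R package accompanying Schauberger (2020) can be used to estimate the actual model.

```r
## Illustrative sketch: a person-specific response-style parameter gamma
## shrinks or spreads the threshold spacing of a partial credit item.
## Simplified parameterization for illustration only (not the exact PCMRS).

pcm_probs <- function(theta, thresholds) {
  # Category probabilities for one item under the partial credit model
  eta <- c(0, cumsum(theta - thresholds))   # cumulative sums over categories
  exp(eta) / sum(exp(eta))
}

style_thresholds <- function(thresholds, gamma) {
  # gamma > 0 spreads thresholds (midpoint style favored);
  # gamma < 0 compresses them (extreme style favored)
  ctr <- mean(thresholds)
  ctr + (thresholds - ctr) * exp(gamma)
}

base_thr <- c(-1.5, -0.5, 0.5, 1.5)   # one 5-category Likert item
theta    <- 0                         # person located at the item's center

round(rbind(
  neutral  = pcm_probs(theta, style_thresholds(base_thr,  0)),
  midpoint = pcm_probs(theta, style_thresholds(base_thr,  1)),
  extreme  = pcm_probs(theta, style_thresholds(base_thr, -1))
), 3)
```

With gamma = -1 the shrunken thresholds push probability toward the end categories (an extreme style), while gamma = 1 widens the thresholds and concentrates probability on the middle category (a midpoint style), mirroring the interpretation described above.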
Misclassification Error, Binary Regression Bias, and Reliability in Multidimensional Poverty Measurement: An Estimation Approach Based on Bayesian Modelling
Pub Date: 2023-04-03 | DOI: 10.1080/15366367.2022.2026104
Héctor Nájera
ABSTRACT Measurement error affects the quality of population orderings of an index and, hence, increases the misclassification of the poor and the non-poor groups and distorts statistical inferences from binary regression models. Consequently, conclusions about the extent, profile, and distribution of poverty are likely to be misleading. However, the size and type (false positives/negatives) of classification error have remained untraceable in poverty research. This paper draws upon previous theoretical literature to develop a Bayesian-based estimator of population misclassification and binary-regression coefficient bias. The study uses the reliability values of existing poverty indices to set up a Monte Carlo study based on factor mixture models to illustrate the connections between measurement error, misclassification, and bias, to evaluate the proposed procedure, and to discuss its importance for real-world applications.
{"title":"Misclassification Error, Binary Regression Bias, and Reliability in Multidimensional Poverty Measurement: An Estimation Approach Based on Bayesian Modelling","authors":"Héctor Nájera","doi":"10.1080/15366367.2022.2026104","DOIUrl":"https://doi.org/10.1080/15366367.2022.2026104","url":null,"abstract":"ABSTRACT Measurement error affects the quality of population orderings of an index and, hence, increases the misclassification of the poor and the non-poor groups and affects statistical inferences from binary regression models. Hence, the conclusions about the extent, profile, and distribution of poverty are likely to be misleading. However, the size and type (false positive/negatives) of classification error have remained untraceable in poverty research. This paper draws upon previous theoretical literature to develop a Bayesian-based estimator of population misclassification and binary-regression coefficient bias. The study uses the reliability values of existing poverty indices to set up a Monte Carlo study based on factor mixture models to illustrate the connections between measurement error, misclassification, and bias and evaluate the procedure and discusses its importance for real-world applications.","PeriodicalId":46596,"journal":{"name":"Measurement-Interdisciplinary Research and Perspectives","volume":null,"pages":null},"PeriodicalIF":1.0,"publicationDate":"2023-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78614906","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Tree-Based Approach to Identifying Response Styles with Anchoring Vignettes
Pub Date: 2023-04-03 | DOI: 10.1080/15366367.2022.2156219
B. Leventhal, C. Zigler
ABSTRACT Survey score interpretations are often plagued by sources of construct-irrelevant variation, such as response styles. In this study, we propose the use of an IRTree model to account for response styles by making use of self-report items and anchoring vignettes. Specifically, we investigate how the IRTree approach with anchoring vignettes compares to traditional approaches that either do not include anchoring vignettes or do not account for response styles. We analyze secondary data using four different models: 1) total score; 2) graded response model; 3) IRTree without anchoring vignettes; and 4) IRTree with anchoring vignettes. We found significant differences in trait estimates from models that account for response styles compared to those that do not. Additionally, we found differences in trait estimates between the IRTree models with and without anchoring vignettes. Model comparisons suggest that the trait differences are due to adjusting for an acquiescence response style.
{"title":"A Tree-Based Approach to Identifying Response Styles with Anchoring Vignettes","authors":"B. Leventhal, C. Zigler","doi":"10.1080/15366367.2022.2156219","DOIUrl":"https://doi.org/10.1080/15366367.2022.2156219","url":null,"abstract":"ABSTRACT Survey score interpretations are often plagued by sources of construct-irrelevant variation, such as response styles. In this study, we propose the use of an IRTree Model to account for response styles by making use of self-report items and anchoring vignettes. Specifically, we investigate how the IRTree approach with anchoring vignettes compares to traditional approaches that either do not include anchoring vignettes or do not account for response styles. We analyze secondary data using four different models: 1) total score; 2) graded response model; 3) IRTree without the consideration of anchoring vignettes, and 4) IRTree considering anchoring vignettes. We found significant differences in trait estimates from models that account for response styles compared to those that do not. Additionally, we found differences in trait estimates between the IRTree Models when considering anchoring vignettes and when not. Model comparisons suggest that trait differences are due to adjusting for acquiescence response style.","PeriodicalId":46596,"journal":{"name":"Measurement-Interdisciplinary Research and Perspectives","volume":null,"pages":null},"PeriodicalIF":1.0,"publicationDate":"2023-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76990868","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Educational Measurement for Applied Researchers: Theory into Practice
Pub Date: 2023-04-03 | DOI: 10.1080/15366367.2022.2106536
Mengyao Zhang
{"title":"Educational Measurement for Applied Researchers: Theory into Practice","authors":"Mengyao Zhang","doi":"10.1080/15366367.2022.2106536","DOIUrl":"https://doi.org/10.1080/15366367.2022.2106536","url":null,"abstract":"","PeriodicalId":46596,"journal":{"name":"Measurement-Interdisciplinary Research and Perspectives","volume":null,"pages":null},"PeriodicalIF":1.0,"publicationDate":"2023-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90330401","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Item Response Theory and Modeling with Stata
Pub Date: 2023-04-03 | DOI: 10.1080/15366367.2022.2133528
T. Raykov
ABSTRACT This software review discusses the capabilities of Stata to conduct item response theory modeling. The commands needed for fitting the popular one-, two-, and three-parameter logistic models are initially discussed. The procedure for testing the discrimination parameter equality in the one-parameter model is then outlined. The commands for fitting several polytomous models are subsequently indicated, as are those facilitating model comparison. Scoring of individual units of analysis with Stata’s item response theory module is next discussed, and various graphical features of this module are pointed out. The review concludes with an illustrative example of using Stata for item response modeling on an empirical data set.
{"title":"Item Response Theory and Modeling with Stata","authors":"T. Raykov","doi":"10.1080/15366367.2022.2133528","DOIUrl":"https://doi.org/10.1080/15366367.2022.2133528","url":null,"abstract":"ABSTRACT This software review discusses the capabilities of Stata to conduct item response theory modeling. The commands needed for fitting the popular one-, two-, and three-parameter logistic models are initially discussed. The procedure for testing the discrimination parameter equality in the one-parameter model is then outlined. The commands for fitting several polytomous models are subsequently indicated, as are those facilitating model comparison. Scoring of individual units of analysis with Stata’s item response theory module is next discussed, and various graphical features of this module are pointed out. The review concludes with an illustration example of using Stata for item response modeling on an empirical data set.","PeriodicalId":46596,"journal":{"name":"Measurement-Interdisciplinary Research and Perspectives","volume":null,"pages":null},"PeriodicalIF":1.0,"publicationDate":"2023-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76714734","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Practical Considerations in Choosing an Anchor Test Form for Equating Under the Random Groups Design
Pub Date: 2023-04-03 | DOI: 10.1080/15366367.2022.2087354
Zhongmin Cui, Yong He
ABSTRACT Careful consideration is necessary when choosing an anchor test form from a list of old test forms for equating under the random groups design. The choice of the anchor form potentially affects the accuracy of equated scores on new test forms. Few guidelines, however, can be found in the literature on choosing the anchor form. Five indices were proposed in this study to aid the selection, and their performance was evaluated through a real-data-based simulation. The results shed some light on how to choose an anchor form in practice.
{"title":"Practical Considerations in Choosing an Anchor Test Form for Equating Under the Random Groups Design","authors":"Zhongmin Cui, Yong He","doi":"10.1080/15366367.2022.2087354","DOIUrl":"https://doi.org/10.1080/15366367.2022.2087354","url":null,"abstract":"ABSTRACT Careful considerations are necessary when there is a need to choose an anchor test form from a list of old test forms for equating under the random groups design. The choice of the anchor form potentially affects the accuracy of equated scores on new test forms. Few guidelines, however, can be found in the literature on choosing the anchor form. Five indices were proposed in this study to aid the selection and their performances were evaluated through a real-data-based simulation. The results shed some light on how to choose an anchor form in practice.","PeriodicalId":46596,"journal":{"name":"Measurement-Interdisciplinary Research and Perspectives","volume":null,"pages":null},"PeriodicalIF":1.0,"publicationDate":"2023-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80174741","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Comparison of R Packages for Automated Test Assembly with Mixed-Integer Linear Programming
Pub Date: 2023-01-02 | DOI: 10.1080/15366367.2022.2151081
Michael R. Peabody
ABSTRACT Many organizations utilize some form of automation in the test assembly process, either fully algorithmic or heuristically constructed. However, one issue with heuristic models is that when the test assembly problem changes, the entire model may need to be re-conceptualized and recoded. In contrast, mixed-integer programming (MIP) is a mathematical representation of the test assembly problem that looks for the statistically optimal solution. Because MIP is a mathematical representation, changes to the test assembly problem typically involve only minor changes to the programming. This review focuses on comparing two free and open-source R packages for mixed-integer linear programming: lpSolveAPI and ompr. Programming style (with code provided), ease of use, run time, and other considerations will be examined. Solvers from other open-source platforms (e.g., Python, Julia) will also be discussed. Code and sample data are also provided.
{"title":"Comparison of R Packages for Automated Test Assembly with Mixed-Integer Linear Programming","authors":"Michael R. Peabody","doi":"10.1080/15366367.2022.2151081","DOIUrl":"https://doi.org/10.1080/15366367.2022.2151081","url":null,"abstract":"ABSTRACT Many organizations utilize some form of automation in the test assembly process; either fully algorithmic or heuristically constructed. However, one issue with heuristic models is that when the test assembly problem changes the entire model may need to be re-conceptualized and recoded. In contrast, mixed-integer programming (MIP) is a mathematical representation of the test assembly problem that looks for the statistically optimal solution. Because MIP is a mathematical representation, changes to the test assembly problem typically involve only minor changes to the programming. This review focuses on comparing two free and open-source R packages for mixed integer linear programming: inlinelpSolveAPI and inlineompr. Programming style (with code provided), ease of use, run time, and other considerations will be examined. Solvers from other open-source platforms (e.g. Python, Julia) will also be discussed. Code and sample data are also provided.","PeriodicalId":46596,"journal":{"name":"Measurement-Interdisciplinary Research and Perspectives","volume":null,"pages":null},"PeriodicalIF":1.0,"publicationDate":"2023-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85055949","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
There are Many Greater Lower Bounds than Cronbach’s α: A Monte Carlo Simulation Study
Pub Date: 2023-01-02 | DOI: 10.1080/15366367.2022.2031484
Josip Novak, B. Rebernjak
ABSTRACT A Monte Carlo simulation study was conducted to examine the performance of α, λ2, λ4, μ2, ωT, GLB-MRFA, and GLB-algebraic coefficients. Population reliability, distribution shape, sample size, test length, and number of response categories were varied simultaneously. The results indicate that α and λ2 perform the worst overall. However, the performance of α is improved if the population reliability is high. λ4 is relatively unbiased but the most imprecise. μ2 and ωT perform relatively well under most conditions. GLB-algebraic outperforms other coefficients under many conditions. GLB-MRFA is useful under few conditions if the population reliability is high. The results corroborate previous suggestions that large samples, longer tests, higher number of response categories, and normally distributed results can make reliability estimates more dependable. Some insights on the interaction of these factors are provided. We discuss the findings compared to previous research. The complete R code used for the simulation is provided in the online supplement.
{"title":"There are Many Greater Lower Bounds than Cronbach’s α: A Monte Carlo Simulation Study","authors":"Josip Novak, B. Rebernjak","doi":"10.1080/15366367.2022.2031484","DOIUrl":"https://doi.org/10.1080/15366367.2022.2031484","url":null,"abstract":"ABSTRACT A Monte Carlo simulation study was conducted to examine the performance of α, λ2, λ4, μ2, ωT, GLBMRFA, and GLBAlgebraic coefficients. Population reliability, distribution shape, sample size, test length, and number of response categories were varied simultaneously. The results indicate that α and λ2 perform the worst overall. However, the performance of α is improved if the population reliability is high. λ4 is relatively unbiased but the most imprecise. μ2 and ωT perform relatively well under most conditions. GLBAlgebraic outperforms other coefficients under many conditions. GLBMRFA is useful under few conditions if the population reliability is high. The results corroborate previous suggestions that large samples, longer tests, higher number of response categories, and normally distributed results can make reliability estimates more dependable. Some insights on the interaction of these factors are provided. We discuss the findings compared to previous research. The complete R code used for the simulation is provided in the online supplement.","PeriodicalId":46596,"journal":{"name":"Measurement-Interdisciplinary Research and Perspectives","volume":null,"pages":null},"PeriodicalIF":1.0,"publicationDate":"2023-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88547175","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Evaluating the Discrepancy Between Scale Reliability and Cronbach’s Coefficient Alpha Using Latent Variable Modeling
Pub Date: 2023-01-02 | DOI: 10.1080/15366367.2022.2031485
T. Raykov, G. Marcoulides
ABSTRACT This article outlines a readily applicable procedure for point and interval estimation of the population discrepancy between reliability and the popular Cronbach’s coefficient alpha for unidimensional multi-component measuring instruments with uncorrelated errors, which are widely used in behavioral and social research. The method is developed within the latent variable modeling framework and can be used to evaluate the degree to which coefficient alpha underestimates scale reliability in empirical measurement research employing such instruments. The approach is straightforwardly utilized with readily available software and is illustrated using a numerical example.
{"title":"Evaluating the Discrepancy Between Scale Reliability and Cronbach’s Coefficient Alpha Using Latent Variable Modeling","authors":"T. Raykov, G. Marcoulides","doi":"10.1080/15366367.2022.2031485","DOIUrl":"https://doi.org/10.1080/15366367.2022.2031485","url":null,"abstract":"ABSTRACT This article outlines a readily applicable procedure for point and interval estimation of the population discrepancy between reliability and the popular Cronbach’s coefficient alpha for unidimensional multi-component measuring instruments with uncorrelated errors, which are widely used in behavioral and social research. The method is developed within the latent variable modeling framework and can be used to evaluate the degree to which coefficient alpha underestimates scale reliability in empirical measurement research employing such instruments. The approach is straight-forwardly utilized with readily available software and is illustrated using a numerical example.","PeriodicalId":46596,"journal":{"name":"Measurement-Interdisciplinary Research and Perspectives","volume":null,"pages":null},"PeriodicalIF":1.0,"publicationDate":"2023-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81764480","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Examining the Relationships between Feedback Practices and Learning Motivation
Pub Date: 2023-01-02 | DOI: 10.1080/15366367.2022.2061236
Zhengdong Gan, Jinbo He, L. Zhang, R. Schumacker
ABSTRACT While classroom feedback has been shown to be a key mediating factor in students’ learning process and performance, the bulk of current research on feedback in foreign language education has focused on how teachers respond to students’ linguistic errors. Published research on how students in a foreign language context respond to different kinds of classroom feedback practice has been sparse. Even less frequently reported is how different forms of classroom feedback practice may support students’ learning motivation. Taking stock of theoretical perspectives concerning feedback and motivation in both educational psychology and language acquisition, this study intends to fill these gaps by investigating what classroom feedback practices tertiary foreign language students experienced, and how these feedback practices were associated with students’ foreign language learning motivation. Student self-feedback was found to be the most powerful predictor of their motivation for English learning. The results suggest that a qualitative change in feedback practices in university foreign language classrooms is needed so that feedback processes can be deployed more effectively to benefit students’ learning.
{"title":"Examining the Relationships between Feedback Practices and Learning Motivation","authors":"Zhengdong Gan, Jinbo He, L. Zhang, R. Schumacker","doi":"10.1080/15366367.2022.2061236","DOIUrl":"https://doi.org/10.1080/15366367.2022.2061236","url":null,"abstract":"ABSTRACT While classroom feedback has been shown to be a key mediating factor in students’ learning process and performance, the bulk of current research on feedback in the field of foreign language education has largely focused on how teachers respond to students’ linguistic errors. Published research on how students in a foreign language context respond to different kinds of classroom feedback practice has been sparse. Even less frequently reported is how different forms of classroom feedback practice may cater to students’ motivation in learning. Taking stock of theoretical perspectives concerning feedback and motivation in both educational psychology and language acquisition, this study intends to fill these gaps by investigating what classroom feedback practices tertiary foreign language students experienced, and how these feedback practices were associated with student foreign language learning motivation. Student self-feedback was found to be the most powerful predictor of their motivation for English learning. The results suggest that there is a need for a qualitative change in feedback practices in university foreign language classrooms in order that feedback processes can be deployed more effectively to benefit students’ learning.","PeriodicalId":46596,"journal":{"name":"Measurement-Interdisciplinary Research and Perspectives","volume":null,"pages":null},"PeriodicalIF":1.0,"publicationDate":"2023-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76016043","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}