Factor mixture modeling (FMM) has been widely adopted in health and behavioral sciences to examine unobserved population heterogeneity. Covariates are often included in FMM as predictors of latent class membership via multinomial logistic regression to help understand the formation and characterization of population heterogeneity. However, interaction effects among covariates have received considerably less attention, which might be attributable to the fact that interaction effects cannot be identified in a straightforward fashion. This study demonstrated the utility of structural equation model (SEM) trees as an exploratory method to automatically search for covariate interactions that might explain heterogeneity in FMM. That is, following the FMM analyses, SEM trees are conducted to identify covariate interactions. Next, latent class membership is regressed on the covariate interactions as well as all main effects of the covariates. This approach was demonstrated using the Traumatic Brain Injury Model System National Database.
{"title":"Incorporating machine learning into factor mixture modeling: Identification of covariate interactions to explain population heterogeneity","authors":"Yan Wang, Tonghui Xu, Jiabin Shen","doi":"10.5964/meth.9487","DOIUrl":"https://doi.org/10.5964/meth.9487","url":null,"abstract":"<p xmlns=\"http://www.ncbi.nlm.nih.gov/JATS1\">Factor mixture modeling (FMM) has been widely adopted in health and behavioral sciences to examine unobserved population heterogeneity. Covariates are often included in FMM as predictors of the latent class membership via multinomial logistic regression to help understand the formation and characterization of population heterogeneity. However, interaction effects among covariates have received considerably less attention, which might be attributable to the fact that interaction effects cannot be identified in a straightforward fashion. This study demonstrated the utility of structural equation model or SEM trees as an exploratory method to automatically search for covariate interactions that might explain heterogeneity in FMM. That is, following FMM analyses, SEM trees are conducted to identify covariate interactions. Next, latent class membership is regressed on the covariate interactions as well as all main effects of covariates. This approach was demonstrated using the Traumatic Brain Injury Model System National Database.","PeriodicalId":18476,"journal":{"name":"Methodology: European Journal of Research Methods for The Behavioral and Social Sciences","volume":"116 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135132981","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The front-door model allows unbiased estimation of a total effect in the presence of unobserved confounding. This guarantee of unbiasedness hinges on a set of assumptions that can be violated in practice. We derive formulas that quantify the amount of bias for specific violations, and contrast them with bias that would be realized from a naive estimator of the effect. Some violations result in simple, monotonic increases in bias, while others lead to more complex bias, consisting of confounding bias, collider bias, and bias amplification. In some instances, these sources of bias can (partially) cancel each other out. We present ways to conduct sensitivity analyses for all violations, and provide code that performs sensitivity analyses for the linear front-door model. We finish with an applied example of the effect of math self-efficacy on educational achievement.
{"title":"Bias and sensitivity analyses for linear front-door models","authors":"Felix Thoemmes, Yongnam Kim","doi":"10.5964/meth.9205","DOIUrl":"https://doi.org/10.5964/meth.9205","url":null,"abstract":"<p xmlns=\"http://www.ncbi.nlm.nih.gov/JATS1\">The front-door model allows unbiased estimation of a total effect in the presence of unobserved confounding. This guarantee of unbiasedness hinges on a set of assumptions that can be violated in practice. We derive formulas that quantify the amount of bias for specific violations, and contrast them with bias that would be realized from a naive estimator of the effect. Some violations result in simple, monotonic increases in bias, while others lead to more complex bias, consisting of confounding bias, collider bias, and bias amplification. In some instances, these sources of bias can (partially) cancel each other out. We present ways to conduct sensitivity analyses for all violations, and provide code that performs sensitivity analyses for the linear front-door model. We finish with an applied example of the effect of math self-efficacy on educational achievement.","PeriodicalId":18476,"journal":{"name":"Methodology: European Journal of Research Methods for The Behavioral and Social Sciences","volume":"220 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135132982","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Edgar C. Merkle, Oludare Ariyo, Sonja D. Winter, Mauricio Garnier-Villarreal
We review common situations in Bayesian latent variable models where the prior distribution that a researcher specifies differs from the prior distribution used during estimation. These situations can arise from the positive definite requirement on correlation matrices, from sign indeterminacy of factor loadings, and from order constraints on threshold parameters. The issue is especially problematic for reproducibility and for model checks that involve prior distributions, including prior predictive assessment and Bayes factors. In these cases, one might be assessing the wrong model, casting doubt on the relevance of the results. The most straightforward solution to the issue sometimes involves use of informative prior distributions. We explore other solutions and make recommendations for practice.
{"title":"Opaque prior distributions in Bayesian latent variable models","authors":"Edgar C. Merkle, Oludare Ariyo, Sonja D. Winter, Mauricio Garnier-Villarreal","doi":"10.5964/meth.11167","DOIUrl":"https://doi.org/10.5964/meth.11167","url":null,"abstract":"<p xmlns=\"http://www.ncbi.nlm.nih.gov/JATS1\">We review common situations in Bayesian latent variable models where the prior distribution that a researcher specifies differs from the prior distribution used during estimation. These situations can arise from the positive definite requirement on correlation matrices, from sign indeterminacy of factor loadings, and from order constraints on threshold parameters. The issue is especially problematic for reproducibility and for model checks that involve prior distributions, including prior predictive assessment and Bayes factors. In these cases, one might be assessing the wrong model, casting doubt on the relevance of the results. The most straightforward solution to the issue sometimes involves use of informative prior distributions. We explore other solutions and make recommendations for practice.","PeriodicalId":18476,"journal":{"name":"Methodology: European Journal of Research Methods for The Behavioral and Social Sciences","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135132978","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Désirée Schoenherr, Alisa Shugaley, Franziska Roller, Lukas A. Knitter, Bernhard Strauss, Uwe Altmann
In clinical research, the dependence of results on the methods used is frequently discussed. In research on nonverbal synchrony, human ratings and automated methods often do not lead to congruent results. Even when automated methods are used, the choice of method and parameter settings is important for obtaining congruent results. However, these choices are often insufficiently reported and do not meet the standards of transparency and reproducibility. This tutorial is aimed at researchers who are not familiar with the software Praat and R and shows in detail how to extract acoustic features such as fundamental frequency or speech rate from video or audio files of conversations. Furthermore, we show how vocal synchrony indices can be calculated from these characteristics to represent how well two interaction partners vocally adapt to each other. All scripts used, as well as a minimal example, can be found on the Open Science Framework and GitHub.
{"title":"Extracting vocal characteristics and calculating vocal synchrony using Praat and R: A tutorial","authors":"Désirée Schoenherr, Alisa Shugaley, Franziska Roller, Lukas A. Knitter, Bernhard Strauss, Uwe Altmann","doi":"10.5964/meth.9375","DOIUrl":"https://doi.org/10.5964/meth.9375","url":null,"abstract":"<p xmlns=\"http://www.ncbi.nlm.nih.gov/JATS1\">In clinical research, the dependence of the results on the methods used is frequently discussed. In research on nonverbal synchrony, human ratings or automated methods do not lead to congruent results. Even when automated methods are used, the choice of the method and parameter settings are important to obtain congruent results. However, these are often insufficiently reported and do not meet the standard of transparency and reproducibility. This tutorial is aimed at researchers who are not familiar with the software Praat and R and shows in detail how to extract acoustic features like fundamental frequency or speech rate from video or audio files in conversations. Furthermore, it is presented how vocal synchrony indices can be calculated from these characteristics to represent how well two interaction partners vocally adapt to each other. All used scripts as well as a minimal example, can be found on the Open Science Framework and Github.","PeriodicalId":18476,"journal":{"name":"Methodology: European Journal of Research Methods for The Behavioral and Social Sciences","volume":"78 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135133247","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper aims to clarify questions regarding the effects of the scaling method on the discrepancy function of metric measurement invariance models. We provide examples and a formal account showing that neither the choice of the scaling method in general nor the choice of a particular referent indicator affects the value of the discrepancy function. Thus, the test statistic is not affected by the scaling method either. The results rely on an appropriate application of the scaling restrictions, which can be phrased as a simple rule: "Apply the scaling restriction in one group only!" We develop formulas to calculate the degrees of freedom of χ²-difference tests comparing metric models to the corresponding configural model. Our findings show that it is impossible to test the invariance of the estimated loading of exactly one indicator, because metric MI models aimed at doing so are actually equivalent to the configural model.
{"title":"Scaling metric measurement invariance models","authors":"Eric Klopp, Stefan Klößner","doi":"10.5964/meth.10177","DOIUrl":"https://doi.org/10.5964/meth.10177","url":null,"abstract":"<p xmlns=\"http://www.ncbi.nlm.nih.gov/JATS1\">This paper aims at clarifying the questions regarding the effects of the scaling method on the discrepancy function of the metric measurement invariance model. We provide examples and a formal account showing that neither the choice of the scaling method in general nor the choice of a particular referent indicator affects the value of the discrepancy function. Thus, the test statistic is not affected by the scaling method, either. The results rely on an appropriate application of the scaling restrictions, which can be phrased as a simple rule: \"Apply the scaling restriction in one group only!\" We develop formulas to calculate the degrees of freedom of χ²-difference tests comparing metric models to the corresponding configural model. Our findings show that it is impossible to test the invariance of the estimated loading of exactly one indicator, because metric MI models aimed at doing so are actually equivalent to the configural model.","PeriodicalId":18476,"journal":{"name":"Methodology: European Journal of Research Methods for The Behavioral and Social Sciences","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135133113","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-01-01. DOI: 10.1027/1614-2241/a000162
Mariëlle Zondervan-Zwijnenburg, S. Depaoli, M. Peeters, R. van de Schoot
Longitudinal developmental research is often focused on patterns of change or growth across different (sub)groups of individuals. In some research contexts, developmental inquiries may involve one or more (sub)groups that are small and therefore difficult to capture properly through statistical analysis. The current study explores the lower-bound limits of subsample sizes in multiple-group latent growth modeling by means of a simulation study. We particularly focus on how the maximum likelihood (ML) and Bayesian estimation approaches differ when (sub)sample sizes are small. The results show that Bayesian estimation resolves computational issues that occur with ML estimation and that the addition of prior information can be the key to detecting a difference between groups when sample and effect sizes are expected to be limited. The acquisition of prior information with respect to the smaller group is especially influential in this context.
{"title":"Pushing the Limits: The Performance of Maximum Likelihood and Bayesian Estimation With Small and Unbalanced Samples in a Latent Growth Model","authors":"Mariëlle Zondervan-Zwijnenburg, S. Depaoli, M. Peeters, R. van de Schoot","doi":"10.1027/1614-2241/a000162","DOIUrl":"https://doi.org/10.1027/1614-2241/a000162","url":null,"abstract":"Longitudinal developmental research is often focused on patterns of change or growth across different (sub)groups of individuals. Particular to some research contexts, developmental inquiries may involve one or more (sub)groups that are small in nature and therefore difficult to properly capture through statistical analysis. The current study explores the lower-bound limits of subsample sizes in a multiple group latent growth modeling by means of a simulation study. We particularly focus on how the maximum likelihood (ML) and Bayesian estimation approaches differ when (sub)sample sizes are small. The results show that Bayesian estimation resolves computational issues that occur with ML estimation and that the addition of prior information can be the key to detect a difference between groups when sample and effect sizes are expected to be limited. The acquisition of prior information with respect to the smaller group is especially influential in this context.","PeriodicalId":18476,"journal":{"name":"Methodology: European Journal of Research Methods for The Behavioral and Social Sciences","volume":"15 1","pages":"31–43"},"PeriodicalIF":3.1,"publicationDate":"2019-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45944329","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-01-01. DOI: 10.1027/1614-2241/a000161
Knut Petzold, Tobias Wolbring
Factorial survey experiments are increasingly used in the social sciences to investigate behavioral intentions. The measurement of self-reported behavioral intentions with factorial survey experiments frequently assumes that the determinants of intended behavior affect actual behavior in a similar way. We critically investigate this fundamental assumption using the misdirected email technique. Student participants of a survey were randomly assigned to a field experiment or a survey experiment. The misdirected email informed the recipient that they had received a scholarship, with varying stakes (full-time vs. book scholarship) and recipient names (German vs. Arabic). In the survey experiment, respondents saw an image of the same email. This validation design ensured a high level of correspondence between units, settings, and treatments across both studies. Results reveal that while the frequencies of self-reported intentions and actual behavior deviate, the treatments show similar relative effects. Hence, although further research on this topic is needed, this study suggests that determinants of behavior might be inferred from behavioral intentions measured with survey experiments.
{"title":"What Can We Learn From Factorial Surveys About Human Behavior?: A Validation Study Comparing Field and Survey Experiments on Discrimination","authors":"Knut Petzold, Tobias Wolbring","doi":"10.1027/1614-2241/a000161","DOIUrl":"https://doi.org/10.1027/1614-2241/a000161","url":null,"abstract":"Factorial survey experiments are increasingly used in the social sciences to investigate behavioral intentions. The measurement of self-reported behavioral intentions with factorial survey experiments frequently assumes that the determinants of intended behavior affect actual behavior in a similar way. We critically investigate this fundamental assumption using the misdirected email technique. Student participants of a survey were randomly assigned to a field experiment or a survey experiment. The email informs the recipient about the reception of a scholarship with varying stakes (full-time vs. book) and recipient’s names (German vs. Arabic). In the survey experiment, respondents saw an image of the same email. This validation design ensured a high level of correspondence between units, settings, and treatments across both studies. Results reveal that while the frequencies of self-reported intentions and actual behavior deviate, treatments show similar relative effects. Hence, although further research on this topic is needed, this study suggests that determinants of behavior might be inferred from behavioral intentions measured with survey experiments.","PeriodicalId":18476,"journal":{"name":"Methodology: European Journal of Research Methods for The Behavioral and Social Sciences","volume":"15 1","pages":"19–30"},"PeriodicalIF":3.1,"publicationDate":"2019-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"57293537","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2018-10-01. DOI: 10.1027/1614-2241/a000158
Esther T. Beierl, M. Bühner, M. Heene
Factorial validity is often assessed using confirmatory factor analysis. Model fit is commonly evaluated using the cutoff values for fit indices proposed by Hu and Bentler (1999). There is a body of research showing that those cutoff values cannot be generalized. Model fit depends not only on the severity of misspecification, but also on nuisance parameters that are independent of the misspecification. Using a simulation study, we demonstrate their influence on measures of model fit. We specified a severe misspecification, omitting a second factor, which signifies factorial invalidity. Measures of model fit indicated only small misfit because nuisance parameters, namely the magnitude of the factor loadings and a balanced versus imbalanced number of indicators per factor, also influenced the degree of misfit. Drawing on our results, we discuss challenges in the assessment of factorial validity.
{"title":"Is That Measure Really One-Dimensional?: Nuisance Parameters Can Mask Severe Model Misspecification When Assessing Factorial Validity","authors":"Esther T. Beierl, M. Bühner, M. Heene","doi":"10.1027/1614-2241/a000158","DOIUrl":"https://doi.org/10.1027/1614-2241/a000158","url":null,"abstract":"Factorial validity is often assessed using confirmatory factor analysis. Model fit is commonly evaluated using the cutoff values for the fit indices proposed by Hu and Bentler (1999). There is a body of research showing that those cutoff values cannot be generalized. Model fit does not only depend on the severity of misspecification, but also on nuisance parameters, which are independent of the misspecification. Using a simulation study, we demonstrate their influence on measures of model fit. We specified a severe misspecification, omitting a second factor, which signifies factorial invalidity. Measures of model fit showed only small misfit because nuisance parameters, magnitude of factor loadings and a balanced/imbalanced number of indicators per factor, also influenced the degree of misfit. Drawing from our results, we discuss challenges in the assessment of factorial validity.","PeriodicalId":18476,"journal":{"name":"Methodology: European Journal of Research Methods for The Behavioral and Social Sciences","volume":"14 1","pages":"188–196"},"PeriodicalIF":3.1,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41624275","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2018-10-01. DOI: 10.1027/1614-2241/a000156
A. Gadermann, Michelle Y. Chen, S. D. Emerson, B. Zumbo
The investigation of differential item functioning (DIF) is important for any group comparison because the validity of the inferences made from scale scores could be compromised if DIF is present. DIF occurs when individuals from different groups show different probabilities of selecting a response option to an item after being matched on the underlying latent variable that the item is supposed to measure. The aim of this paper is to inform the practice of DIF analyses in survey research. We focus on three quantitative methods to detect DIF, namely nonparametric item response theory (NIRT), ordinal logistic regression (OLR), and mixed-effects or multilevel models. Using these methods, we demonstrate how to examine DIF at the item and scale levels, as well as in multilevel settings. We discuss when these techniques are appropriate to use, what data assumptions they have, and their advantages and disadvantages in the analysis of survey data.
{"title":"Examining Validity Evidence of Self-Report Measures Using Differential Item Functioning: An Illustration of Three Methods","authors":"A. Gadermann, Michelle Y. Chen, S. D. Emerson, B. Zumbo","doi":"10.1027/1614-2241/a000156","DOIUrl":"https://doi.org/10.1027/1614-2241/a000156","url":null,"abstract":"The investigation of differential item functioning (DIF) is important for any group comparison because the validity of the inferences made from scale scores could be compromised if DIF is present. DIF occurs when individuals from different groups show different probabilities of selecting a response option to an item after being matched on the underlying latent variable that the item is supposed to measure. The aim of this paper is to inform the practice of DIF analyses in survey research. We focus on three quantitative methods to detect DIF, namely nonparametric item response theory (NIRT), ordinal logistic regression (OLR), and mixed-effects or multilevel models. Using these methods, we demonstrate how to examine DIF at the item and scale levels, as well as in multilevel settings. We discuss when these techniques are appropriate to use, what data assumptions they have, and their advantages and disadvantages in the analysis of survey data.","PeriodicalId":18476,"journal":{"name":"Methodology: European Journal of Research Methods for The Behavioral and Social Sciences","volume":"14 1","pages":"164–175"},"PeriodicalIF":3.1,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47116384","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2018-10-01. DOI: 10.1027/1614-2241/a000154
K. Markus
Bollen and colleagues have advocated the use of formative scales despite the fact that formative scales lack an adequate underlying theory to guide their development and validation, such as the theory that underlies reflective scales. Three conceptual impediments stand in the way of developing such theory: the redefinition of measurement restricted to the context of model fitting, the inscrutable notion of conceptual unity, and a systematic conflation of item scores with attributes. Setting aside these impediments opens the door to progress in developing the needed theory to support formative scale use. A broader perspective facilitates consideration of standard scale development concerns as applied to formative scales, including scale development, item analysis, reliability, and item bias. While formative scales require a different pattern of emphasis, all five traditional sources of validity evidence apply to formative scales. Responsible use of formative scales requires greater attention to developing the requisite underlying theory.
{"title":"Three Conceptual Impediments to Developing Scale Theory for Formative Scales","authors":"K. Markus","doi":"10.1027/1614-2241/a000154","DOIUrl":"https://doi.org/10.1027/1614-2241/a000154","url":null,"abstract":"Bollen and colleagues have advocated the use of formative scales despite the fact that formative scales lack an adequate underlying theory to guide development or validation such as that which underlies reflective scales. Three conceptual impediments impede the development of such theory: the redefinition of measurement restricted to the context of model fitting, the inscrutable notion of conceptual unity, and a systematic conflation of item scores with attributes. Setting aside these impediments opens the door to progress in developing the needed theory to support formative scale use. A broader perspective facilitates consideration of standard scale development concerns as applied to formative scales including scale development, item analysis, reliability, and item bias. While formative scales require a different pattern of emphasis, all five of the traditional sources of validity evidence apply to formative scales. Responsible use of formative scales requires greater attention to developing the requisite underlying theory.","PeriodicalId":18476,"journal":{"name":"Methodology: European Journal of Research Methods for The Behavioral and Social Sciences","volume":"14 1","pages":"156–163"},"PeriodicalIF":3.1,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46711045","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}