Accentuating the Negative? A Political Efficacy Question-Wording Experiment
Pub Date: 2010-06-29 | DOI: 10.1027/1614-2241/A000012
H. Clarke, A. Kornberg, T. Scotto
Survey research on political efficacy is longstanding. In a number of countries efficacy has been measured using batteries of negatively worded “agree-disagree” statements. In this paper, we investigate the measurement properties of the Canadian variant of this traditional battery and compare its performance with an alternative, positively worded, battery. The research is based on data gathered by a random half-sample experiment administered in the 2004 Political Support in Canada national panel survey. Analyses of these data provide no evidence that negatively framing the statements designed to tap political efficacy is problematic. Rather, it appears that students of political efficacy would have been worse off if they had spent the past several decades conducting analyses employing positively worded variants of the traditional statements. Perhaps most important, scholars have not been misled by acquiescence bias depressing efficacious responses to the traditional battery. These experimental results ind...
{"title":"Accentuating the negative?: A political efficacy question-wording- experiment","authors":"H. Clarke, A. Kornberg, T. Scotto","doi":"10.1027/1614-2241/A000012","DOIUrl":"https://doi.org/10.1027/1614-2241/A000012","url":null,"abstract":"Survey research on political efficacy is longstanding. In a number of countries efficacy has been measured using batteries of negatively worded “agree-disagree” statements. In this paper, we investigate the measurement properties of the Canadian variant of this traditional battery and compare its performance with an alternative, positively worded, battery. The research is based on data gathered by a random half-sample experiment administered in the 2004 Political Support in Canada national panel survey. Analyses of these data provide no evidence that negatively framing the statements designed to tap political efficacy is problematic. Rather, it appears that students of political efficacy would have been worse off if they had spent the past several decades conducting analyses employing positively worded variants of the traditional statements. Perhaps most important, scholars have not been misled by acquiescence bias depressing efficacious responses to the traditional battery. These experimental results ind...","PeriodicalId":18476,"journal":{"name":"Methodology: European Journal of Research Methods for The Behavioral and Social Sciences","volume":"6 1","pages":"107-117"},"PeriodicalIF":3.1,"publicationDate":"2010-06-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"57292714","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
New Developments in Missing Data Analysis
Pub Date: 2010-01-20 | DOI: 10.1027/1614-2241/A000001
L. A. van der Ark, Jeroen K. Vermunt
In this special issue you will find four papers on handling missing data. All papers were presented at the 2007 Fall Meeting of the Social Science Division of the Dutch Statistical Society (VVS-OR) in Tilburg, The Netherlands. Together, these four papers give an excellent overview of the state of the art in missing data analysis.

To date, in virtually all fields of the social sciences, researchers are required to deal with missing data in a sophisticated way. Ignoring the problem, for example by simply removing all observations that contain missing data or by thoughtlessly applying software that makes the problem go away, may lead to seriously biased statistical results and wrong conclusions, and is no longer an option. Instead, the researcher must consider the reasons why some of the data are missing and act accordingly. Given that in the social sciences most data are obtained from respondents who responded to tests, questionnaires, surveys, or stimuli in an experimental setting, the first option that comes to mind is to approach the respondents with missing scores again, ask them the reason for their nonresponse, and ask them to respond after all. Unfortunately, this is usually not a realistic option, and the researcher must rely on statistical solutions.

One way of dealing with missing data is to incorporate the mechanism that caused the missingness into the statistical modeling of the data. In the context of educational measurement, Goegebeur, De Boeck, and Molenberghs (2010) discuss test speededness, which refers to the phenomenon that respondents do not respond to certain items in a test or examination due to a lack of time. They clearly explain how speededness can be incorporated into the statistical model. Using this model-based approach, they show how to identify respondents whose scores were affected by speededness. An advantage of this approach is that it allows the researcher to deal with data that are not missing at random.

In some situations, it will not be possible to translate the researcher's theories on the missingness mechanism into a statistical model, because such theories are too complex or not available. Probably the best-known strategy for dealing with missing data is to assume that the missing scores are missing at random and to conduct (multiple) imputation: replacing the missing scores in the data by plausible values. Two papers discuss imputation methods. First, Van Ginkel, Sijtsma, Van der Ark, and Vermunt (2010) investigated the occurrence of missing data and current practices of handling nonresponse in test and questionnaire data in personality psychology. They found that in the large majority of published research reporting missing data, either the handling of missing data was not discussed, cases with missing values were deleted, or ad hoc procedures were used. In order to improve the use of appropriate methods, they proposed Method Two-Way for handling missing data in test and questionnaire data. Method Two-Way is a multiple imputation technique that is easy to understand and use. Simulation studies have shown that, for statistics frequently used in the analysis of test and questionnaire data, the results obtained with Method Two-Way are comparable to those obtained with technically more advanced methods. In the second paper on multiple imputation, Van Buuren (2010) discusses fully conditional specification for imputing missing item scores. Fully conditional specification may be regarded as a technically more advanced method, and it is available in software packages such as R and SPSS. In a simulation study, Van Buuren (2010) showed that fully conditional specification outperformed Method Two-Way in the estimation of Cronbach's alpha.

Because the papers by Van Ginkel et al. (2010) and Van Buuren (2010) reach different conclusions about Method Two-Way, we believe some editorial comments are in order to explain the diverging results. We consider both papers to be of high quality, but they differ in emphasis. First, the percentages of missing data differ between the study by Van Buuren (2010) and the study by Van Ginkel et al. (2010). On the one hand, Van Buuren (2010) compared Method Two-Way and fully conditional specification using large percentages of missingness (44-78%); in such extreme situations, the technically more advanced method shows superior performance over the simpler method. On the other hand, Van Ginkel et al. (2010) showed that percentages of missingness in practice are much lower (on average, 9% of the response vectors contained at least one missing observation) and referred to studies with missingness percentages between 1 and 20; in such typical situations, simple and sophisticated methods perform similarly. Moreover, given the high percentages of missingness, the more sophisticated Bayesian version of two-way imputation (Van Ginkel, Van der Ark, ...
{"title":"New Developments in Missing Data Analysis","authors":"L. A. van der Ark, Jeroen K. Vermunt","doi":"10.1027/1614-2241/A000001","DOIUrl":"https://doi.org/10.1027/1614-2241/A000001","url":null,"abstract":"In this special issue you will find four papers on handling missing data. All papers have been presented at the 2007 Fall Meeting of Social Science Division of the Dutch Statistical Society (VVS-OR) in Tilburg, The Netherlands. Together, these four papers give an excellent overview of state of the art in missing data analysis. To date, in virtually all fields of the social sciences, researchers are required to deal sophistically with missing data. Ignoring the problem, for example, by simply removing all observations that contain missing data or thoughtlessly applying software that makes the problem go away may lead to seriously biased statistical results and wrong conclusions, and is no longer an option. Instead the researcher must consider the reasons why some of the data are missing and act accordingly. Given that in the social sciences most data are obtained from respondents who responded to tests, questionnaires, surveys, or stimuli in an experimental setting, the first option that comes to mind is approaching those respondents with missing scores again, ask them the reason for their nonresponse, and ask them to respond yet. Unfortunately, this is usually not a realistic option and the researcher must rely on statistical solutions. One way of dealing with missing data is to incorporate the mechanism that caused the missingness into the statistical modeling of the data. In the context of educational measurement, Goegebeur, De Boeck, and Molenberghs (2010) discuss test speededness, which refers to the phenomenon that respondents do not respond to certain items in the test or examination due to a lack of time. They clearly explain how speededness can be incorporated into the statistical model. Using this model-based approach, they show how to identify respondents whose scores were affected by speededness. Advantage of this approach is that it allows the researcher to deal with data that are not missing at random. In some situations, it will not be possible to translate the researcher’s theories on the missingness mechanism into a statistical model because such theories are too complex or not available. Probably the best known strategy to deal with missing data is to assume that the missing scores are missing at random and conduct (multiple) imputation: Replacing the missing scores in the data by plausible values. Two papers discuss imputation methods. First, Van Ginkel, Sijtsma, Van der Ark, and Vermunt (2010) investigated the occurrence of missing data and current practices of handling nonresponse in test and questionnaire data in personality psychology. They found that in the large majority of published research reporting missing data, either the handling of missing data was not discussed, cases with missing values were deleted, or ad hoc procedures were used. In order to improve the use of appropriate methods they proposed using Method Two-Way for handling missing data in test and questionnaire data. 
Method Two-Way is a multiple imputation t","PeriodicalId":18476,"journal":{"name":"Methodology: European Journal of Research Methods for The Behavioral and Social Sciences","volume":"6 1","pages":"1-2"},"PeriodicalIF":3.1,"publicationDate":"2010-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1027/1614-2241/A000001","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"57292586","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Incidence of Missing Item Scores in Personality Measurement, and Simple Item-Score Imputation
Pub Date: 2010-01-20 | DOI: 10.1027/1614-2241/A000003
J. V. van Ginkel, K. Sijtsma, L. A. van der Ark, J. Vermunt
The focus of this study was the incidence of different kinds of missing-data problems in personality research and the handling of these problems. Missing-data problems were reported in approximately half of more than 800 articles published in three leading personality journals. In these articles, unit nonresponse, attrition, and planned missingness were distinguished but missing item scores in trait measurement were reported most frequently. Listwise deletion was the most frequently used method for handling all missing-data problems. Listwise deletion is known to reduce the accuracy of parameter estimates and the power of statistical tests and often to produce biased statistical analysis results. This study proposes a simple alternative method for handling missing item scores, known as two-way imputation, which leaves the sample size intact and has been shown to produce almost unbiased results based on multi-item questionnaire data.
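The two-way estimate itself is simple enough to sketch in a few lines. The Python fragment below is an illustration rather than the authors' implementation: each missing cell is filled with person mean plus item mean minus overall mean, and a random residual can be added as in the multiple-imputation variant. All names and the residual model are assumptions of this sketch.

```python
import numpy as np

def two_way_impute(X, rng=None):
    """Fill missing item scores with person mean + item mean - overall mean.

    X: (persons x items) array with np.nan marking missing scores.
    If rng is given, a normal residual is added, as in the
    multiple-imputation variant (call repeatedly for several MI copies).
    """
    X = np.asarray(X, dtype=float)
    miss = np.isnan(X)
    pm = np.nanmean(X, axis=1, keepdims=True)   # person means over observed items
    im = np.nanmean(X, axis=0, keepdims=True)   # item means over observed persons
    om = np.nanmean(X)                          # overall mean
    fitted = pm + im - om                       # two-way estimate for every cell
    if rng is not None:
        # residual SD estimated from observed deviations around the two-way fit
        resid_sd = (X - fitted)[~miss].std(ddof=1)
        fitted = fitted + rng.normal(0.0, resid_sd, size=X.shape)
    out = X.copy()
    out[miss] = fitted[miss]
    return out

# Toy data: 200 persons, 10 five-point items, roughly 10% missing at random
rng = np.random.default_rng(1)
X = np.where(rng.random((200, 10)) < 0.1, np.nan, rng.integers(1, 6, (200, 10)))
completed = [two_way_impute(X, np.random.default_rng(s)) for s in range(5)]
```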
{"title":"Incidence of Missing Item Scores in Personality Measurement, and Simple Item-Score Imputation","authors":"J. V. van Ginkel, K. Sijtsma, L. A. van der Ark, J. Vermunt","doi":"10.1027/1614-2241/A000003","DOIUrl":"https://doi.org/10.1027/1614-2241/A000003","url":null,"abstract":"The focus of this study was the incidence of different kinds of missing-data problems in personality research and the handling of these problems. Missing-data problems were reported in approximately half of more than 800 articles published in three leading personality journals. In these articles, unit nonresponse, attrition, and planned missingness were distinguished but missing item scores in trait measurement were reported most frequently. Listwise deletion was the most frequently used method for handling all missing-data problems. Listwise deletion is known to reduce the accuracy of parameter estimates and the power of statistical tests and often to produce biased statistical analysis results. This study proposes a simple alternative method for handling missing item scores, known as two-way imputation, which leaves the sample size intact and has been shown to produce almost unbiased results based on multi-item questionnaire data.","PeriodicalId":18476,"journal":{"name":"Methodology: European Journal of Research Methods for The Behavioral and Social Sciences","volume":"6 1","pages":"17-30"},"PeriodicalIF":3.1,"publicationDate":"2010-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"57293131","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Item Imputation Without Specifying Scale Structure
Pub Date: 2010-01-20 | DOI: 10.1027/1614-2241/A000004
S. Buuren
Imputation of incomplete questionnaire items should preserve the structure among items and the correlations between scales. This paper explores the use of fully conditional specification (FCS) to impute missing data in questionnaire items. FCS is particularly attractive for items because it does not require (1) a specification of the number of factors or classes, (2) a specification of which item belongs to which scale, and (3) assumptions about conditional independence among items. Imputation models can be specified using standard features of the R package MICE 1.16. A limited simulation shows that MICE outperforms two-way imputation with respect to Cronbach’s α and the correlations between scales. We conclude that FCS is a promising alternative for imputing incomplete questionnaire items.
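MICE 1.16 is an R package; as a language-neutral illustration of the FCS idea, namely imputing each incomplete variable in turn from a conditional model given all the other variables, here is a sketch using scikit-learn's IterativeImputer, which implements a chained-equations scheme. This is an analogue under assumed toy data, not the MICE code the paper uses.

```python
import numpy as np
# IterativeImputer is still flagged experimental in scikit-learn
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(0)
items = rng.normal(size=(300, 8)) + rng.normal(size=(300, 1))  # correlated items
items[rng.random(items.shape) < 0.15] = np.nan                 # 15% missing

# Each cycle regresses every incomplete column on all others;
# sample_posterior=True draws from the predictive distribution, so
# refitting with different seeds yields multiple imputations.
completions = [
    IterativeImputer(max_iter=10, sample_posterior=True,
                     random_state=m).fit_transform(items)
    for m in range(5)
]

def cronbach_alpha(X):
    # alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    k = X.shape[1]
    return k / (k - 1) * (1 - X.var(axis=0, ddof=1).sum()
                          / X.sum(axis=1).var(ddof=1))

print(np.mean([cronbach_alpha(X) for X in completions]))  # pooled over MI copies
```

Pooling Cronbach's alpha over the completed data sets mirrors the criterion on which the paper compares FCS with two-way imputation.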
{"title":"Item Imputation Without Specifying Scale Structure","authors":"S. Buuren","doi":"10.1027/1614-2241/A000004","DOIUrl":"https://doi.org/10.1027/1614-2241/A000004","url":null,"abstract":"Imputation of incomplete questionnaire items should preserve the structure among items and the correlations between scales. This paper explores the use of fully conditional specification (FCS) to impute missing data in questionnaire items. FCS is particularly attractive for items because it does not require (1) a specification of the number of factors or classes, (2) a specification of which item belongs to which scale, and (3) assumptions about conditional independence among items. Imputation models can be specified using standard features of the R package MICE 1.16. A limited simulation shows that MICE outperforms two-way imputation with respect to Cronbach’s α and the correlations between scales. We conclude that FCS is a promising alternative for imputing incomplete questionnaire items.","PeriodicalId":18476,"journal":{"name":"Methodology: European Journal of Research Methods for The Behavioral and Social Sciences","volume":"6 1","pages":"31-36"},"PeriodicalIF":3.1,"publicationDate":"2010-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"57292689","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Analysis of Incomplete Data Using Inverse Probability Weighting and Doubly Robust Estimators
Pub Date: 2010-01-20 | DOI: 10.1027/1614-2241/A000005
S. Vansteelandt, J. Carpenter, M. Kenward
This article reviews inverse probability weighting methods and doubly robust estimation methods for the analysis of incomplete data sets. We first consider methods for estimating a population mean when the outcome is missing at random, in the sense that measured covariates can explain whether or not the outcome is observed. We then sketch the rationale of these methods and elaborate on their usefulness in the presence of influential inverse weights. We finally outline how to apply these methods in a variety of settings, such as for fitting regression models with incomplete outcomes or covariates, emphasizing the use of standard software programs.
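The two estimators reviewed here can be written in a few lines. Below is a minimal numpy/scikit-learn sketch, assuming a simulated setting where the outcome is missing at random given one covariate: the IPW estimator weights observed outcomes by inverse estimated response probabilities, and the augmented (doubly robust) estimator adds an outcome regression so that it remains consistent if either working model is correct. Variable names and models are illustrative, not the article's code.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(42)
n = 5000
x = rng.normal(size=n)
y = 2.0 + 1.5 * x + rng.normal(size=n)      # true population mean of Y is 2.0
p_obs = 1 / (1 + np.exp(-(0.5 + x)))        # response probability depends on X (MAR)
r = rng.random(n) < p_obs                   # r = 1: outcome observed

X = x.reshape(-1, 1)
pi_hat = LogisticRegression().fit(X, r).predict_proba(X)[:, 1]  # response model
m_hat = LinearRegression().fit(X[r], y[r]).predict(X)           # outcome model

mu_ipw = np.mean(r * y / pi_hat)                    # inverse probability weighting
mu_dr = np.mean(m_hat + r * (y - m_hat) / pi_hat)   # augmented / doubly robust
print(mu_ipw, mu_dr)
```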
{"title":"Analysis of Incomplete Data Using Inverse Probability Weighting and Doubly Robust Estimators","authors":"S. Vansteelandt, J. Carpenter, M. Kenward","doi":"10.1027/1614-2241/A000005","DOIUrl":"https://doi.org/10.1027/1614-2241/A000005","url":null,"abstract":"This article reviews inverse probability weighting methods and doubly robust estimation methods for the analysis of incomplete data sets. We first consider methods for estimating a population mean when the outcome is missing at random, in the sense that measured covariates can explain whether or not the outcome is observed. We then sketch the rationale of these methods and elaborate on their usefulness in the presence of influential inverse weights. We finally outline how to apply these methods in a variety of settings, such as for fitting regression models with incomplete outcomes or covariates, emphasizing the use of standard software programs.","PeriodicalId":18476,"journal":{"name":"Methodology: European Journal of Research Methods for The Behavioral and Social Sciences","volume":"6 1","pages":"37-48"},"PeriodicalIF":3.1,"publicationDate":"2010-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"57292700","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Obtaining Equations From the Proportional Odds Model to Set Multiple Cut Scores on a Test
Pub Date: 2009-11-09 | DOI: 10.1027/1614-2241.5.4.123
R. Bersabé, Teresa Rivas, C. Berrocal
From the proportional odds (PO) model, we obtain general equations to compute multiple cut scores on a test score. This analytical procedure is based on the relationship between a test score (X) and an ordinal outcome variable (Y) with more than two categories. Cut scores are established at the test scores corresponding to the intersection of adjacent category distributions. The application of this procedure is illustrated by an example with data from an actual study on eating disorders (EDs). In this example, two cut scores on the Eating Attitudes Test (EAT-26) are established in order to differentiate between three ordered categories: (1) asymptomatic, (2) symptomatic, and (3) eating disorder. Diagnoses were made from the responses to a self-report (Q-EDD) that operationalizes DSM-IV criteria for EDs. Alternatives to the PO model, when the PO assumption is rejected, are discussed.
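The underlying computation can be sketched briefly. The paper derives analytical equations; the fragment below solves the same intersection condition numerically for a single-predictor PO model with hypothetical threshold and slope values: category probabilities are differences of cumulative logits, and a cut score is the test score at which two adjacent category curves cross.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import expit

# Hypothetical estimates for logit P(Y <= j | x) = alpha_j - beta * x
alpha = np.array([10.0, 20.0])   # two thresholds -> three ordered categories
beta = 0.8                       # slope of the test score X

def category_probs(x):
    cum = expit(alpha - beta * x)          # P(Y <= 1 | x), P(Y <= 2 | x)
    return np.diff(np.r_[0.0, cum, 1.0])   # P(Y = 1 | x), ..., P(Y = 3 | x)

def cut_score(j, lo=0.0, hi=40.0):
    # test score where the curves of adjacent categories j and j+1 intersect
    return brentq(lambda x: category_probs(x)[j] - category_probs(x)[j + 1],
                  lo, hi)

print(cut_score(0), cut_score(1))  # two cut scores separating three categories
```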
{"title":"Obtaining Equations From the Proportional Odds Model to Set Multiple Cut Scores on a Test","authors":"R. Bersabé, Teresa Rivas, C. Berrocal","doi":"10.1027/1614-2241.5.4.123","DOIUrl":"https://doi.org/10.1027/1614-2241.5.4.123","url":null,"abstract":"From the proportional odds (PO) model, we obtain general equations to compute multiple cut scores on a test score. This analytical procedure is based on the relationship between a test score (X) and an ordinal outcome variable (Y) with more than two categories. Cut scores are established at the test scores corresponding to the intersection of adjacent category distributions. The application of this procedure is illustrated by an example with data from an actual study on eating disorders (EDs). In this example, two cut scores on the Eating Attitudes Test (EAT-26) are established in order to differentiate between three ordered categories: (1) asymptomatic, (2) symptomatic, and (3) eating disorder. Diagnoses were made from the responses to a self-report (Q-EDD) that operationalizes DSM-IV criteria for EDs. Alternatives to the PO model, when the PO assumption is rejected, are discussed.","PeriodicalId":18476,"journal":{"name":"Methodology: European Journal of Research Methods for The Behavioral and Social Sciences","volume":"5 1","pages":"123-130"},"PeriodicalIF":3.1,"publicationDate":"2009-11-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1027/1614-2241.5.4.123","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"57292576","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Multitrait-Multimethod Matrix at 50!
Pub Date: 2009-07-23 | DOI: 10.1027/1614-2241.5.3.71
M. Eid, Fridtjof W. Nussbeck
Fifty years ago, in 1959, Campbell and Fiske published one of the most influential papers in psychology. In their article Convergent and discriminant validation by the multitrait-multimethod matrix, they argued that a single operationalization of a single construct is not sufficient for purposes of test validation: multiple measures of multiple constructs are necessary. Campbell and Fiske recommended using at least two methods that are as different as possible for measuring the constructs. Moreover, they made clear that it is not possible to obtain a measure of a trait that is free of method-specific influences. Whenever, in science, we measure a construct (a trait), we have to use a specific measurement method; the trait and the method therefore influence the observed score simultaneously. In order to separate method- from trait-specific influences, it is thus always necessary to consider more than one trait and more than one method in the validation process.

Campbell and Fiske proposed the multitrait-multimethod (MTMM) matrix for analyzing convergent and discriminant validity. The MTMM matrix consists of the correlations between all the measures representing the different traits measured by the different methods. These correlations can be evaluated by several criteria developed by Campbell and Fiske: if different measures of the same construct are highly correlated, this indicates convergent validity; if the measures of one construct are uncorrelated with the measures of another construct, this indicates discriminant validity.

Campbell and Fiske's article has had, and continues to have, an enormous influence on psychology (Eid & Diener, 2006). It is the most often cited paper ever published in Psychological Bulletin (Sternberg, 1992). To date, it has been cited 4,735 times (Social Science Citation Index, February 27, 2009, 3:41 pm), and its citation rate is still increasing. The article has had an important impact not only on test validation studies but also on methodological research, as many researchers have developed new approaches for analyzing MTMM data and have tried to overcome problems and limitations of earlier approaches to analyzing MTMM matrices.

This special issue is dedicated to honoring Campbell and Fiske's influential work. It presents three different modern approaches for analyzing MTMM data. All contributors use the same data set to illustrate their approaches, which enables readers to concentrate on comparing how the different approaches analyze convergent and discriminant validity and how trait- and method-specific influences can be identified and quantified. The data consist of three personality traits (extraversion, neuroticism, and conscientiousness) assessed by three raters (one self- and two peer raters). Each scale consists of four items (adjectives such as talkative, conscie...
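Campbell and Fiske's comparisons can be made concrete with a small sketch. Given a correlation matrix whose rows and columns are trait-method units, the convergent-validity evidence lies in the monotrait-heteromethod correlations (same trait, different method; these should be high), which should clearly exceed the heterotrait-heteromethod correlations. The labels, layout, and synthetic data below are illustrative assumptions, not the special issue's data set.

```python
import itertools
import numpy as np

traits, methods = ["E", "N", "C"], ["self", "peer1", "peer2"]
labels = [(t, m) for m in methods for t in traits]     # 9 trait-method units

def mtmm_summary(R, labels):
    """Mean monotrait-heteromethod vs. heterotrait-heteromethod correlation."""
    mono, hetero = [], []
    for i, j in itertools.combinations(range(len(labels)), 2):
        (t1, m1), (t2, m2) = labels[i], labels[j]
        if m1 == m2:
            continue                                   # heteromethod blocks only
        (mono if t1 == t2 else hetero).append(R[i, j])
    return np.mean(mono), np.mean(hetero)

# Synthetic example: three uncorrelated latent traits, rated with noise by
# each of the three methods, in the same column order as `labels`
rng = np.random.default_rng(3)
T = rng.normal(size=(500, 3))
scores = np.hstack([T + 0.7 * rng.normal(size=(500, 3)) for _ in methods])
conv, disc = mtmm_summary(np.corrcoef(scores, rowvar=False), labels)
print(conv, disc)   # convergent mean should clearly exceed the heterotrait mean
```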
{"title":"The Multitrait-Multimethod Matrix at 50!","authors":"M. Eid, Fridtjof W. Nussbeck","doi":"10.1027/1614-2241.5.3.71","DOIUrl":"https://doi.org/10.1027/1614-2241.5.3.71","url":null,"abstract":"Fifty years ago, in 1959, Campbell and Fiske published one of the most influential papers in psychology. In their article Convergent and discriminant validation by the multitraitmultimethod matrix, they argued that it is not sufficient to consider one single operationalization of one construct for purposes of test validation but that multiple measures of multiple constructs are necessary. Campbell and Fiske recommended using at least two methods that are as different as possible for measuring the constructs. Moreover, Campbell and Fiske made clear that it is not possible to get a measure of a trait that is free of method-specific influences. Whenever, in science, we measure a construct (a trait) we have to use a specific measurement method. Therefore, it is the trait and the method that influence the observed score simultaneously. In order to separate methodfrom traitspecific influences, it is thus always necessary to consider more than one trait and more than one method in the validation process. Campbell and Fiske proposed the multitraitmultimethod (MTMM) matrix for analyzing the convergent and discriminant validity. The MTMM matrix consists of the correlations between all multiple measures representing the different traits measured by the different methods. These correlations can be evaluated by several criteria that have been developed by Campbell and Fiske. If the different measures of the same construct are highly correlated, this proves convergent validity. If the different measures of one construct are not correlated with the measures of another construct, this indicates discriminant validity. Campbell and Fiske’s article had and has an enormous influence on psychology (Eid & Diener, 2006). It is the most often cited paper that has ever been published in Psychological Bulletin (Sternberg, 1992). To date, it has been cited 4,735 times (Social Science Citation Index, February 27, 2009, 3:41 pm), and its citation rate is increasing. Their article does not only have an important impact on test validation studies but also has a strong impact on methodological research as many researchers have developed new approaches for analyzing MTMM data and tried to overcome some of the problems and limitations that are related to former approaches of analyzing MTMM matrices. This special issue is dedicated to honoring Campbell and Fiske’s influential work. It presents three different modern approaches for analyzing MTMM data. All contributors use the same data set illustrating their approaches. This enables readers to concentrate on the comparison of the different approaches with respect to the way convergent and discriminant validity can be analyzed as well as how traitand method-specific influences can be identified and quantified. The data consists of three personality traits (extraversion, neuroticism, and conscientiousness) assessed by three raters (one selfand two peer raters). 
Each scale consists of four items (adjectives such as talkative, conscie","PeriodicalId":18476,"journal":{"name":"Methodology: European Journal of Research Methods for The Behavioral and Social Sciences","volume":"18 1","pages":"71-71"},"PeriodicalIF":3.1,"publicationDate":"2009-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"57292510","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A CTC(M−1) Model for Different Types of Raters
Pub Date: 2009-07-23 | DOI: 10.1027/1614-2241.5.3.88
Fridtjof W. Nussbeck, M. Eid, C. Geiser, D. Courvoisier, T. Lischetzke
Many psychologists collect multitrait-multimethod (MTMM) data to assess the convergent and discriminant validity of psychological measures. In order to choose the most appropriate model, the types of methods applied have to be considered. It is shown how the combination of interchangeable and structurally different raters can be analyzed with an extension of the correlated trait-correlated method minus one [CTC(M−1)] model. This extension allows for disentangling individual rater biases (unique method effects) from shared rater biases (common method effects). The basic ideas of this model are presented and illustrated by an empirical example.
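For orientation, the generic CTC(M−1) measurement structure can be sketched as follows; the notation follows common presentations of the model rather than this paper's exact equations. One method serves as the reference and receives no method factor, so the method factors of the remaining methods represent deviations from the reference.

```latex
% Hedged sketch of a generic CTC(M-1) measurement structure.
% Y_{jk}: indicator of trait j measured by method k; method 1 = reference.
\begin{align*}
Y_{j1} &= \lambda_{j1}\, T_j + E_{j1}
  && \text{(reference method: no method factor)} \\
Y_{jk} &= \lambda_{jk}\, T_j + \gamma_{jk}\, M_k + E_{jk},
  \qquad k > 1 .
\end{align*}
```

The extension discussed in the paper additionally decomposes the rater effect into a component shared by interchangeable raters (common rater bias) and rater-specific components (unique rater bias); that decomposition is not formalized in this sketch.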
{"title":"A CTC(M−1) Model for Different Types of Raters","authors":"Fridtjof W. Nussbeck, M. Eid, C. Geiser, D. Courvoisier, T. Lischetzke","doi":"10.1027/1614-2241.5.3.88","DOIUrl":"https://doi.org/10.1027/1614-2241.5.3.88","url":null,"abstract":"Many psychologists collect multitrait-multimethod (MTMM) data to assess the convergent and discriminant validity of psychological measures. In order to choose the most appropriate model, the types of methods applied have to be considered. It is shown how the combination of interchangeable and structurally different raters can be analyzed with an extension of the correlated trait-correlated method minus one [CTC(M−1)] model. This extension allows for disentangling individual rater biases (unique method effects) from shared rater biases (common method effects). The basic ideas of this model are presented and illustrated by an empirical example.","PeriodicalId":18476,"journal":{"name":"Methodology: European Journal of Research Methods for The Behavioral and Social Sciences","volume":"5 1","pages":"88-98"},"PeriodicalIF":3.1,"publicationDate":"2009-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"57292564","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Three-Mode Models for Multitrait-Multimethod Data
Pub Date: 2009-07-23 | DOI: 10.1027/1614-2241.5.3.78
F. Oort
Multitrait-multimethod (MTMM) data are characterized by three modes: traits, methods, and subjects. Considering subjects as random, and traits and methods as fixed, stochastic three-mode models can be used to analyze MTMM covariance data. Stochastic three-mode models can be written as linear latent variable models with direct product (DP) restrictions on the parameter matrices (Oort, 1999), yielding three-mode factor models (Bentler & Lee, 1979) and composite direct product models (Browne, 1984) as special cases. DP restrictions on factor loadings and factor correlations facilitate interpretation of the results and enable easy evaluation of the validity requirements of MTMM correlations (Campbell & Fiske, 1959). As an illustrative example, a series of stochastic three-mode models has been fitted to data of three personality traits of 482 students, measured with 12 items, through three methods.
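The direct product restriction can be stated compactly; the following is a sketch using standard Kronecker algebra, not necessarily the paper's exact notation. Loadings and factor correlations factor into method and trait parts, and by the Kronecker mixed-product rule the model-implied covariance matrix inherits that factorization:

```latex
% DP restrictions on the loading matrix Lambda and factor correlations Phi
\begin{align*}
\Lambda &= \Lambda_M \otimes \Lambda_T, \qquad
\Phi     = \Phi_M \otimes \Phi_T, \\
\Sigma  &= \Lambda \Phi \Lambda^{\top} + \Theta
         = \bigl(\Lambda_M \Phi_M \Lambda_M^{\top}\bigr) \otimes
           \bigl(\Lambda_T \Phi_T \Lambda_T^{\top}\bigr) + \Theta .
\end{align*}
```

This separation of a method part from a trait part is what makes the Campbell-Fiske validity comparisons directly readable from the estimated parameters.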
{"title":"Three-Mode Models for Multitrait-Multimethod Data","authors":"F. Oort","doi":"10.1027/1614-2241.5.3.78","DOIUrl":"https://doi.org/10.1027/1614-2241.5.3.78","url":null,"abstract":"Multitrait-multimethod (MTMM) data are characterized by three modes: traits, methods, and subjects. Considering subjects as random, and traits and methods as fixed, stochastic three-mode models can be used to analyze MTMM covariance data. Stochastic three-mode models can be written as linear latent variable models with direct product (DP) restrictions on the parameter matrices (Oort, 1999), yielding three-mode factor models (Bentler & Lee, 1979) and composite direct product models (Browne, 1984) as special cases. DP restrictions on factor loadings and factor correlations facilitate interpretation of the results and enable easy evaluation of the validity requirements of MTMM correlations (Campbell & Fiske, 1959). As an illustrative example, a series of stochastic three-mode models has been fitted to data of three personality traits of 482 students, measured with 12 items, through three methods.","PeriodicalId":18476,"journal":{"name":"Methodology: European Journal of Research Methods for The Behavioral and Social Sciences","volume":"5 1","pages":"78-87"},"PeriodicalIF":3.1,"publicationDate":"2009-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"57292524","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Influence of Misspecification of the Heteroscedasticity on Multilevel Regression Parameter and Standard Error Estimates
Pub Date: 2008-05-07 | DOI: 10.1027/1614-2241.4.2.67
E. Korendijk, C. Maas, M. Moerbeek, P. Heijden
As in ordinary regression models, homoscedasticity of the residual variances is an assumption of multilevel analysis that mostly goes unchecked. In experimental research, however, the residual variance component at level two may differ between the experimental and the control condition, leading to heteroscedastic second-level variances. Using a simulation study, the consequences of ignoring second-level heteroscedasticity for the estimation of the fixed and random parameters and their standard errors were investigated. It was found that the standard error of the second-level variance is underestimated, but that the estimated fixed parameters of the independent variables, the first-level variance, and their standard errors are mostly unbiased.
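A minimal sketch of this kind of simulation, under assumed parameter values: generate two-level data whose level-two variance differs between conditions, then fit a standard random-intercept model. statsmodels' MixedLM assumes a single level-two variance, which is exactly the misspecification under study here; all names and values are illustrative, not the authors' design.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n_groups, group_size = 50, 20
cond = np.repeat([0, 1], n_groups // 2)       # control vs. experimental groups
tau = np.where(cond == 1, 2.0, 0.5)           # heteroscedastic level-2 SDs
u = rng.normal(0.0, tau)                      # one random intercept per group

df = pd.DataFrame({
    "group": np.repeat(np.arange(n_groups), group_size),
    "cond": np.repeat(cond, group_size),
})
df["y"] = 1.0 + 0.4 * df["cond"] + u[df["group"]] + rng.normal(0.0, 1.0, len(df))

# Homoscedastic random-intercept fit: one level-2 variance for all groups,
# ignoring the condition-dependent variances that generated the data
fit = smf.mixedlm("y ~ cond", df, groups=df["group"]).fit()
print(fit.summary())   # fixed effects are expected to remain nearly unbiased
```

Repeating this over many replications and comparing the spread of the variance estimates with their reported standard errors reproduces the kind of evidence the study describes.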
{"title":"The Influence of Misspecification of the Heteroscedasticity on Multilevel Regression Parameter and Standard Error Estimates","authors":"E. Korendijk, C. Maas, M. Moerbeek, P. Heijden","doi":"10.1027/1614-2241.4.2.67","DOIUrl":"https://doi.org/10.1027/1614-2241.4.2.67","url":null,"abstract":"Like in ordinary regression models, in multilevel analysis, homoscedasticity of the residual variances is an assumption that is mostly unchecked. However, in experimental research, the residual variance component at level two may differ in the experimental and the control condition, leading to heteroscedastic second level variances. Using a simulation study, the consequences of ignoring second level heteroscedasticity on the estimation of the fixed and random parameters and their standard errors was investigated. It was found that the standard error of the second level variance is underestimated, but that the estimated fixed parameters of the independent variables, the first level variance and their standard errors are mostly unbiased.","PeriodicalId":18476,"journal":{"name":"Methodology: European Journal of Research Methods for The Behavioral and Social Sciences","volume":"4 1","pages":"67-72"},"PeriodicalIF":3.1,"publicationDate":"2008-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"57292453","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}