Uncertainty about Rater Variance and Small Dimension Effects Impact Reliability in Supervisor Ratings

D. Jackson, G. Michaelides, Christopher Dewberry, Amanda Jones, S. Toms, Benjamin Schwencke, Wei-Ning Yang

Human Performance, 35(1), 278–301. Published 2022-08-08. DOI: 10.1080/08959285.2022.2111433

Citations: 0
Abstract
We modeled the effects commonly described as defining the measurement structure of supervisor performance ratings. In doing so, we contribute to different theoretical perspectives, including components of the multifactor and mediated models of performance ratings. Across two reanalyzed samples (Sample 1, N ratees = 392, N raters = 244; Sample 2, N ratees = 342, N raters = 397), we found a structure primarily reflective of general (>27% of variance explained) and rater-related (>49%) effects, with relatively small performance dimension effects (between 1% and 11%). We drew on findings from the assessment center literature to approximate the proportion of rater variance that might theoretically contribute to reliability in performance ratings. We found that even moderate contributions of rater-related variance to reliability resulted in a sizable impact on reliability estimates, drawing them closer to accepted criteria.
About the journal:
Human Performance publishes research investigating the nature and role of performance in the workplace and in organizational settings and offers a rich variety of information going beyond the study of traditional job behavior. Dedicated to presenting original research, theory, and measurement methods, the journal investigates individual, team, and firm level performance factors that influence work and organizational effectiveness. Human Performance is a respected forum for behavioral scientists interested in variables that motivate and promote high-level human performance, particularly in organizational and occupational settings. The journal seeks to identify and stimulate relevant research, communication, and theory concerning human capabilities and effectiveness. It serves as a valuable intellectual link between such disciplines as industrial-organizational psychology, individual differences, work physiology, organizational behavior, human resource management, and human factors.