
Latest publications from Industrial and Organizational Psychology: Perspectives on Science and Practice

Revisiting predictor–criterion construct congruence: Implications for designing personnel selection systems
IF 15.8 | CAS Tier 3 (Psychology) | Q1 PSYCHOLOGY, APPLIED | Pub Date: 2023-08-31 | DOI: 10.1017/iop.2023.35
L. Hough, F. Oswald
Overview. In their focal article, Sackett et al. (in press) describe the implications of their new meta-analytic estimates of the validity of widely used predictors for employee selection. Contradicting the received wisdom of Schmidt and Hunter (1998), Sackett et al. conclude that predictor methods with content specifically tailored to jobs generally have greater validity for predicting job performance than general measures reflecting psychological constructs (e.g., cognitive abilities, personality traits). They also point out that the standard deviations around their mean meta-analytic validity estimates are often large, leading to their question “why the variability?” (p. x). They suggest many legitimate contributors. We propose an additional moderator variable of critical importance: predictor–criterion construct congruence, which accounts for a great deal of the variability in validity coefficients found in meta-analysis. That is, the extent to which what is measured is congruent with what is predicted is an important determinant of the level of validity obtained. Sackett et al. (2022) acknowledge that the strongest predictors in their reanalysis are job-specific measures and that a “closer behavioral match between predictor and criterion” (p. 2062) might contribute to higher validities. Many in our field have also noted the importance of “behavioral consistency” between predictors and criteria relevant to selection, while also arguing for another type of congruence: the relationships between constructs in both the predictor and criterion space (e.g., Bartram, 2005; Campbell et al., 1993; Campbell & Knapp, 2001; Hogan & Holland, 2003; Hough, 1992; Hough & Oswald, 2005; Pulakos et al., 1988; Sackett & Lievens, 2008; Schmitt & Ostroff, 1986). The above reflects an important distinction between two types of congruence: behavior-based congruence and construct-based congruence.
When ‘past behavior predicts future behavior’ (as may be the case for jobs requiring past experience and where behavior-oriented employment assessments such as interviews, biodata, and work samples are involved), behavior-based congruence exists. Behavior-based assessments can vary a great deal across jobs but tend to ask about past experiences that are influenced by a complex mix of KSAOs. By contrast, construct-based congruence aligns employment tests of job-relevant KSAOs (e.g., verbal and math skills, conscientiousness) with relevant work criteria, such as technical performance or counterproductive work behavior (e.g., Campbell & Wiernik, 2015). What we are strongly suggesting here is that regardless of the approach to congruence adopted in selection, it is the congruence between predictor and criterion constructs that is a key factor.
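The moderating role of construct congruence can be sketched with a toy simulation: a predictor that loads on construct A is correlated with a criterion that reflects a mix of constructs A and B. All loadings, error variances, and congruence values below are hypothetical illustrations, not estimates from the article.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Latent constructs: A drives the predictor; the criterion mixes A and B.
a = rng.standard_normal(n)
b = rng.standard_normal(n)

def observed_validity(congruence: float) -> float:
    """Predictor-criterion correlation when `congruence` is the proportion
    of criterion construct variance shared with the predictor's construct."""
    predictor = a + rng.standard_normal(n)              # construct A + error
    criterion = (np.sqrt(congruence) * a
                 + np.sqrt(1.0 - congruence) * b
                 + rng.standard_normal(n))              # mixed constructs + error
    return float(np.corrcoef(predictor, criterion)[0, 1])

for c in (1.0, 0.5, 0.1):
    print(f"congruence = {c:.1f} -> observed validity = {observed_validity(c):.2f}")
```

With these (arbitrary) loadings the expected validity is .5 times the square root of the congruence, so validity falls steadily as the measured and predicted constructs diverge, consistent with congruence acting as a moderator.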
Citations: 1
Interpreting validity evidence: It is time to end the horse race
IF 15.8 | CAS Tier 3 (Psychology) | Q1 PSYCHOLOGY, APPLIED | Pub Date: 2023-08-31 | DOI: 10.1017/iop.2023.27
Kevin Murphy
For almost 25 years, two conclusions arising from a series of meta-analyses (summarized by Schmidt & Hunter, 1998) have been widely accepted in the field of I–O psychology: (a) that cognitive ability tests showed substantial validity as predictors of job performance, with scores on these tests accounting for over 25% of the variance in performance, and (b) that cognitive ability tests were among the best predictors of performance and, given their simplicity and broad applicability, were likely to be the starting point for most selection systems. Sackett, Zhang, Berry, and Lievens (2022) challenged these conclusions, showing how unrealistic corrections for range restriction in meta-analyses had led to substantial overestimates of the validity of most tests and assessments and suggesting that cognitive tests were not among the best predictors of performance. Sackett, Zhang, Berry, and Lievens (2023) illustrate many important implications of their analysis for evaluating selection tests and developing selection test batteries. Discussions of the validity of alternative predictors of performance often take on the character of a horse race, in which a great deal of attention is given to determining which is the best predictor. From this perspective, one of the messages of Sackett et al. (2022) might be that cognitive ability has been dethroned as the best predictor, and that structured interviews, job knowledge tests, empirically keyed biodata forms, and work sample tests are all better choices. In my view, dethroning cognitive ability tests as the best predictor is one of the least important conclusions of the Sackett et al. (2022) review. Although horse races might be fun, the quest to find the best single predictor of performance is arguably pointless because personnel selection is inherently a multivariate problem, not a univariate one. First, personnel selection is virtually never done based on scores on a single test or assessment.
There are certainly scenarios where a low score on a single assessment might lead to a negative selection decision; an applicant for a highly selective college who submits a combined SAT score of 560 (320 in Math and 240 in Evidence-Based Reading and Writing) will almost certainly be rejected. However, real-world selection decisions that are based on any type of systematic assessment will usually be based on multiple assessments (e.g., interviews plus tests, biodata plus interviews). More to the point, the criteria that are used to evaluate the validity and value of selection tests are almost certainly multivariate. That is, although selection tests are often validated against supervisory ratings of job performance, they are not designed or used to predict these ratings, which often show uncertain relationships with actual effectiveness in the workplace (Adler et al., 2016; Murphy et al., 2018). Rather, they are used to help organizations make decisions, and assessing the quality of these decisions usually requires considering multiple criteria. In fact, meta-analyses of selection test validity take a univariate perspective, typically examining the relationship between test scores and a measure of job performance (as noted above, usually supervisory ratings, but sometimes objective measures or training outcomes).
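The range-restriction corrections at issue in Sackett et al.'s (2022) critique can be illustrated with the standard Thorndike Case II formula for direct range restriction. The observed validity and SD ratios below are hypothetical, chosen only to show how strongly the assumed ratio drives the corrected estimate:

```python
import math

def correct_for_range_restriction(r: float, u: float) -> float:
    """Thorndike Case II correction for direct range restriction.
    r: validity observed in the restricted (incumbent) sample.
    u: ratio of applicant-pool SD to incumbent SD on the predictor (u >= 1).
    """
    return (r * u) / math.sqrt(1.0 + r * r * (u * u - 1.0))

# The same observed validity of .25 under an aggressive SD ratio (u = 2.0)
# versus a modest one (u = 1.2); both u values are hypothetical.
for u in (2.0, 1.2):
    corrected = correct_for_range_restriction(0.25, u)
    print(f"u = {u:.1f} -> corrected r = {corrected:.2f}")
```

The aggressive ratio nearly doubles the estimate while the modest one barely moves it, which is why the choice of artifact values, not the observed correlations, carries much of the argument.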
Citations: 1
To correct or not to correct for range restriction, that is the question: Looking back and ahead to move forward
IF 15.8 | CAS Tier 3 (Psychology) | Q1 PSYCHOLOGY, APPLIED | Pub Date: 2023-08-31 | DOI: 10.1017/iop.2023.38
In-Sue Oh, Jorge Mendoza, H. Le
Sackett et al. (2023) start their focal article by stating that they identified “previously unnoticed flaws” in range restriction (RR) corrections in most validity generalization (VG) meta-analyses of selection procedures reviewed in their 2022 article. Following this provocative opening statement, they discuss how researchers and practitioners have handled (and should handle) RR corrections when estimating the operational validity of a selection procedure, both in VG meta-analyses (whose input studies are predominantly concurrent studies) and in individual validation studies (which serve as input to VG meta-analyses). The purpose of this commentary is twofold. We first provide an essential review of Sackett et al.’s (2022) three propositions serving as the major rationales for their recommendations regarding RR corrections (e.g., no corrections for RR in concurrent validation studies). We then provide our critical analyses of their rationales and recommendations regarding RR corrections to put them in perspective, along with some additional thoughts.
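As a rough illustration of what is at stake in these correction choices, the sketch below applies one common correction sequence (criterion unreliability first, then direct range restriction) to hypothetical inputs. It is not the procedure Oh et al. endorse or critique in detail, only a demonstration of how each step moves the operational validity estimate:

```python
import math

def case_ii(r: float, u: float) -> float:
    """Thorndike Case II correction for direct range restriction."""
    return (r * u) / math.sqrt(1.0 + r * r * (u * u - 1.0))

def operational_validity(r_obs: float, r_yy: float, u: float) -> float:
    """Correct for criterion unreliability (predictor unreliability is
    deliberately left uncorrected for operational validity), then for
    direct range restriction."""
    return case_ii(r_obs / math.sqrt(r_yy), u)

# All inputs hypothetical: observed r = .25, criterion reliability = .60,
# applicant-to-incumbent SD ratio u = 1.3.
r_obs, r_yy, u = 0.25, 0.60, 1.3
print(f"no corrections:            {r_obs:.2f}")
print(f"+ criterion unreliability: {r_obs / math.sqrt(r_yy):.2f}")
print(f"+ range restriction:       {operational_validity(r_obs, r_yy, u):.2f}")
```

Whether (and with what artifact values) each step is defensible in concurrent designs is precisely the question the commentary takes up.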
Citations: 1
A response to speculations about concurrent validities in selection: Implications for cognitive ability
IF 15.8 | CAS Tier 3 (Psychology) | Q1 PSYCHOLOGY, APPLIED | Pub Date: 2023-08-31 | DOI: 10.1017/iop.2023.43
D. Ones, C. Viswesvaran
Although we have many important areas of agreement with Sackett and colleagues, we must address two issues that form the backbone of the focal article. First, we explain why range restriction corrections in concurrent validation are appropriate, describing the conceptual basis for range restriction corrections and highlighting some pertinent technical issues that should elicit skepticism about the focal article’s assertions. Second, we disagree with the assertion that the operational validity of cognitive ability is much lower than previously reported. We conclude with some implications for applied practice.
Citations: 1
Is it also time to revisit situational specificity?
IF 15.8 | CAS Tier 3 (Psychology) | Q1 PSYCHOLOGY, APPLIED | Pub Date: 2023-08-31 | DOI: 10.1017/iop.2023.40
J. DeSimone, T. Fezzey
Sackett et al.’s (2023) focal article asserts that the predictors with the highest criterion-related validity in selection settings are specific to individual jobs and emphasizes the importance of adjusting for range restriction (and attenuation) using study-specific artifact estimates. These positions, along with other recent perspectives on meta-analysis, lead us to reassess the extent to which situational specificity (SS) is worth consideration in organizational selection contexts. In this commentary, we will (a) examine the historical context of both the SS and validity generalization (VG) perspectives, (b) evaluate evidence pertaining to these perspectives, and (c) consider whether it is possible for both perspectives to coexist.
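The classic VG-versus-SS comparison rests on partitioning the observed variance in validity coefficients into sampling error and residual variance. A minimal sketch of that bookkeeping, with hypothetical meta-analytic inputs:

```python
def sampling_error_variance(mean_r: float, mean_n: float) -> float:
    """Variance in observed correlations expected from sampling error
    alone (Hunter-Schmidt): (1 - r_bar**2)**2 / (N_bar - 1)."""
    return (1.0 - mean_r ** 2) ** 2 / (mean_n - 1.0)

# Hypothetical inputs: mean observed r = .25, observed SD of r = .15,
# average per-study sample size N = 100.
mean_r, sd_r, mean_n = 0.25, 0.15, 100
var_err = sampling_error_variance(mean_r, mean_n)
share = var_err / sd_r ** 2
print(f"share of observed variance due to sampling error: {share:.0%}")
```

Under the classic "75% rule," artifacts explaining most of the observed variance supports validity generalization; a large residual, as in this made-up example, is the pattern that situational-specificity arguments appeal to.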
Citations: 1
On the undervaluing of diversity in the validity–diversity tradeoff consideration
IF 15.8 | CAS Tier 3 (Psychology) | Q1 PSYCHOLOGY, APPLIED | Pub Date: 2023-08-31 | DOI: 10.1017/iop.2023.29
J. Olenick, Ajay V. Somaraju
Sackett et al. (2023) provide a useful, more practice-oriented discussion of the Sackett et al. (2022) report, which reexamined meta-analytic corrections for a wide variety of selection tools across common content and process domains. We expand on their discussion of the implications of the new validity estimates for the classic validity–diversity tradeoff by arguing that the benefits of diversity are still underestimated when assessing this tradeoff. To be fair, this issue is not limited to Sackett et al.’s efforts but rather represents a shortcoming of the field at large. Regardless, these limitations mean that if diversity benefits were better understood by the field and properly accounted for in tradeoff estimates, even greater reductions in the usefulness of predictors with high group mean differences would likely be observed. We make three key points. First, we argue that the benefits of group diversity are not included in selection decisions, leading to underestimation of diversity benefits. Second, we elaborate on the central role of interdependence as a condition that maximizes the importance of diversity. Finally, we connect these issues to the long-term implications of assessment decisions containing adverse impact.
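How a standardized subgroup mean difference on a predictor translates into adverse impact can be sketched with a simple normal-distribution model. The cutoff and d values below are hypothetical; this is a toy model, not an estimate for any particular predictor or group:

```python
from statistics import NormalDist

def impact_ratio(cutoff_z: float, d: float) -> float:
    """Minority-to-majority selection-rate ratio under top-down selection
    on a normally distributed predictor: majority mean 0, minority mean -d
    (d = standardized subgroup difference)."""
    nd = NormalDist()
    majority_rate = 1.0 - nd.cdf(cutoff_z)
    minority_rate = 1.0 - nd.cdf(cutoff_z + d)
    return minority_rate / majority_rate

# A cutoff passing roughly the top 30% of the majority group, compared
# across two hypothetical subgroup differences:
for d in (1.0, 0.3):
    print(f"d = {d:.1f} -> impact ratio = {impact_ratio(0.52, d):.2f}")
# The four-fifths rule flags ratios below 0.80.
```

A predictor with a large d produces a far more severe impact ratio at the same cutoff, which is the mechanical side of the validity–diversity tradeoff; the commentary's point is that the diversity side of that ledger is the one the field undervalues.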
Citations: 1
Rumors of general mental ability’s demise are the next red herring
IF 15.8 | CAS Tier 3 (Psychology) | Q1 PSYCHOLOGY, APPLIED | Pub Date: 2023-08-31 | DOI: 10.1017/iop.2023.37
Jeffrey M. Cucina, Theodore L. Hayes
In this paper, we focus on the lowered validity of general mental ability (GMA) tests by presenting: (a) a history of the range restriction correction controversy; (b) a review of validity evidence using various criteria; and (c) multiple paradoxes that arise with a lower GMA validity.
Citations: 1
Structured interviews: moving beyond mean validity…
IF 15.8 | CAS Tier 3 (Psychology) | Q1 PSYCHOLOGY, APPLIED | Pub Date: 2023-08-31 | DOI: 10.1017/iop.2023.42
Allen I. Huffcutt, S. Murphy
As interview researchers, we were of course delighted by the focal authors’ finding that structured interviews emerged as the predictor with the highest mean validity in their meta-analysis (Sackett et al., 2023, Table 1). Moreover, they found that structured interviews not only provide strong validity but do so while having significantly lower impact on racial groups than other top predictors such as biodata, knowledge, work samples, assessment centers, and GMA (see their Figure 1). Unfortunately, it also appears that structured interviews have the highest variability in validity (i.e., .42 ± .24) among top predictors (Sackett et al., 2023, Table 1). Such a level of inconsistency is concerning and warrants closer examination. Given that the vast majority of interview research (including our own) has focused on understanding and improving mean validity as opposed to reducing variability, we advocate for a fundamental shift in focus. Specifically, we call for more research on identifying factors that can induce variability in validity and, subsequently, on finding ways to minimize their influence. Our commentary will highlight several prominent factors that have the potential to contribute significantly to this inconsistency in validity. We group them according to three major components of the interview process: interview format/methodology, applicant cognitive processes, and contextual factors.
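The variability the authors flag can be made concrete as an 80% credibility interval around the reported mean of .42 with SD .24, assuming normally distributed true validities:

```python
# 80% credibility interval: mean +/- 1.28 * SD, where 1.28 is the
# approximate z bounding the middle 80% of a normal distribution.
mean_rho, sd_rho = 0.42, 0.24  # figures cited in the abstract above
z80 = 1.28
low, high = mean_rho - z80 * sd_rho, mean_rho + z80 * sd_rho
print(f"80% credibility interval: ({low:.2f}, {high:.2f})")
```

The interval spans roughly .11 to .73, i.e., from a weak predictor to an exceptional one, which is the inconsistency the commentary argues should now be the research target.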
Pages: 344-348
Citations: 1
Polyculturalism as a multilevel phenomenon
IF 15.8 CAS Region 3 (Psychology) Q1 PSYCHOLOGY, APPLIED Pub Date: 2023-08-31 DOI: 10.1017/iop.2023.41
Suzette Caleo, Daniel S. Whitman
Pages: 401-404
Citations: 0
Going beyond a validity focus to accommodate megatrends in selection system design
IF 15.8 CAS Region 3 (Psychology) Q1 PSYCHOLOGY, APPLIED Pub Date: 2023-08-31 DOI: 10.1017/iop.2023.28
John W. Jones, M. Cunningham
Sackett, Zhang, Berry, and Lievens (2023) are to be commended for correcting the validity estimates of widely used predictors, many of which turned out to have less validity than prior studies led us to believe. Yet we should recognize that psychologists and their clients were misled for many years about the utility of some mainstream assessments, and selection system design surely suffered. Although Sackett et al. (2023) offered useful recommendations for researchers, they never really addressed selection system design from a practitioner perspective. This response aims to address that omission, emphasizing a multidimensional approach to design science (Casillas et al., 2019).
Pages: 336-340
Citations: 1