We commend the focal article by Dupré and Wille (2024), which introduces personality development goals at work. To many organizational researchers, this may come across as a bold proposal, given its novelty and provocative nature. Recognizing its potential, we discuss theoretical and methodological challenges that researchers eager to advance this line of research may encounter. We encourage future research to tackle these issues to further advance the theoretical development and practical application of personality development at work.
"Personality development goals at work: Would a new assessment tool help?" by Wen-Dong Li, Jing Hu, and Jiexin Wang. International Journal of Selection and Assessment, 33(1). Published 2024-07-22. doi:10.1111/ijsa.12498. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1111/ijsa.12498
Job interviews involve an exchange of information between interviewers and applicants to assess fit from each side. But current frameworks on interviewers' job previews and applicants' self-presentation do not completely capture these exchange processes. Using a grounded theory approach, we developed a theoretical model that spans both literatures by showing the complex relationships between job previews and self-presentation in the interview. Our study also introduces a new way of categorizing applicant self-presentation and reveals why interviewers and applicants choose to use certain strategies. Based on 43 qualitative interviews with applicants and interviewers, we identified five dominant applicant self-presentation responses to job preview information: Receding from the Application Process, Reciprocating Reality, Exploiting the RJP, Resisting in Defiance, and Reciprocating Illusion. Furthermore, we found that applicants present many versions of themselves that not only include their actual, favorable, and ought self but also their anticipated-future self. We also identify interviewers' and applicants' conflicting motives for presenting reality and illusion. Our work provides a deeper understanding of job previews and self-presentation by providing a big-picture, yet fine-grained examination of the communication processes from the viewpoint of the applicant and the interviewer, illustrating implications for both parties and proposing new avenues for research.
"Reality or illusion: A qualitative study on interviewer job previews and applicant self-presentation" by Annika Schmitz-Wilhelmy and Donald M. Truxillo. International Journal of Selection and Assessment, 33(1). Published 2024-07-15. doi:10.1111/ijsa.12495. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1111/ijsa.12495
Personality testing is a critical component of organizational assessment and selection processes. Despite nearly a century of research recognizing faking as a concern in personality assessment, the impact of order effects on faking has not been thoroughly examined. This study investigates whether the sequence of administering personality and cognitive ability measures affects the extent of faking. Previous research suggests administering personality measures early in the assessment process to mitigate adverse impact; however, models of faking behavior and signaling theory imply that test order could influence faking. In two simulated applicant laboratory studies (Study 1 N = 172, Study 2 N = 174), participants were randomly assigned to complete personality measures either before or after cognitive ability tests. Results indicate that participants who completed personality assessments first exhibited significantly higher levels of faking compared to those who took cognitive ability tests first. These findings suggest that the order of test administration influences faking, potentially due to the expenditure of cognitive resources during cognitive ability assessments. To enhance the integrity of selection procedures, administrators should consider the sequence of test administration to mitigate faking and improve the accuracy of personality assessments. This study also underscores the need for continued exploration of contextual factors influencing faking behavior. Future research should investigate the mechanisms driving these order effects and develop strategies to reduce faking in personality assessments.
"Assessment order and faking behavior" by Brett L. Wallace and Gary N. Burns. International Journal of Selection and Assessment, 33(1). Published 2024-07-10. doi:10.1111/ijsa.12496
Hayley I. Moore, Patrick D. Dunlop, Djurre Holtrop, Marylène Gagné
Some research suggests that job applicants tend to express negative perceptions of asynchronous video interviews (AVIs). Drawing from basic psychological needs theory, we proposed that these negative perceptions arise partly from the lack of human interaction between applicants and the organization during an AVI, which fails to satisfy applicants' need for relatedness. Recruiting participants through Prolific, we conducted two experimental studies that aimed to manipulate the level of relatedness support through a relatedness-need-supportive introductory video containing empathetic messaging and humor. Using a vignette approach, participants in Study 1 (N = 100) evaluated a hypothetical AVI that included one of two introductory videos: relatedness-supportive versus neutral messaging. The relatedness-supportive video yielded higher relatedness need satisfaction (d = 0.53) and organizational attraction ratings (d = 0.49) than the neutral video. In Study 2, participants (N = 231) completed an AVI that included one of the two videos and evaluated their AVI experience. In contrast to the vignette study, we observed no significant differences between groups in relatedness need satisfaction, organizational attraction, or other outcomes. Our findings provide little evidence that humor and empathetic video messaging improve reactions to an AVI, and they illustrate the limits of the external validity of vignette designs.
"I can't get no (need) satisfaction: Using a relatedness need-supportive intervention to improve applicant reactions to asynchronous video interviews" by Hayley I. Moore, Patrick D. Dunlop, Djurre Holtrop, and Marylène Gagné. International Journal of Selection and Assessment, 33(1). Published 2024-07-10. doi:10.1111/ijsa.12493
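The effect sizes reported for this study (d = 0.53, 0.49) are Cohen's d values. As a reminder of what that statistic summarizes, here is a minimal sketch using pooled-variance Cohen's d; the function name and rating data are purely illustrative, not taken from the study:

```python
import statistics

def cohens_d(group_a, group_b):
    """Cohen's d: mean difference scaled by the pooled standard deviation."""
    n_a, n_b = len(group_a), len(group_b)
    mean_a, mean_b = statistics.fmean(group_a), statistics.fmean(group_b)
    var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
    pooled_sd = (((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)) ** 0.5
    return (mean_a - mean_b) / pooled_sd

# Hypothetical 1-7 ratings of relatedness need satisfaction in two conditions
supportive = [5, 6, 5, 7, 6, 5, 6]
neutral = [4, 5, 4, 5, 5, 4, 5]
print(round(cohens_d(supportive, neutral), 2))
```

A d of roughly 0.5, as in Study 1, corresponds to the two group means sitting about half a pooled standard deviation apart.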
Timothy G. Wingate, Joshua S. Bourdage, Piers Steel
The employment interview is used to assess myriad constructs to inform personnel selection decisions. This article describes the first meta-analytic review of the criterion-related validity of interview-based assessments of specific constructs (i.e., related to task and contextual performance). As such, this study explores the suitability of the interview for predicting specific dimensions of performance, and furthermore, if and how interviews should be designed to inform the assessment of distinct constructs. A comprehensive search process identified k = 37 studies comprising N = 30,646 participants (N = 4449 with the removal of one study). Results suggest that constructs related to task (ρ = .30) and contextual (ρ = .28) performance are assessed with similar levels of criterion-related validity. Although interview evaluations of task and contextual performance constructs did not show discriminant validity within the interview itself, interview evaluations were more predictive of the targeted criterion construct than of alternative constructs. We further found evidence that evaluations of contextual performance constructs might particularly benefit from the adoption of more structured interview scoring procedures. However, we expect that new research on interview design factors may find additional moderating effects and we point to critical gaps in our current body of literature on employment interviews. These results illustrate how a construct-specific approach to interview validity can spur new developments in the modeling, assessment, and selection of specific work performance constructs.
"Evaluating interview criterion-related validity for distinct constructs: A meta-analysis" by Timothy G. Wingate, Joshua S. Bourdage, and Piers Steel. International Journal of Selection and Assessment, 33(1). Published 2024-07-09. doi:10.1111/ijsa.12494. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1111/ijsa.12494
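The ρ values in this meta-analysis are operational validities: mean observed correlations corrected for statistical artifacts such as criterion unreliability. A bare-bones sketch of that two-step logic, with hypothetical (sample size, observed r) pairs and an assumed criterion reliability of .60 (all numbers illustrative, not from the paper):

```python
def weighted_mean_r(studies):
    """Sample-size-weighted mean observed correlation (bare-bones meta-analysis)."""
    total_n = sum(n for n, _ in studies)
    return sum(n * r for n, r in studies) / total_n

def correct_for_criterion_unreliability(r_bar, r_yy):
    """Disattenuate the mean correlation for unreliability in the criterion."""
    return r_bar / r_yy ** 0.5

# Hypothetical (sample size, observed r) pairs
studies = [(120, 0.22), (300, 0.25), (80, 0.18)]
r_bar = weighted_mean_r(studies)                              # ~0.232
rho = correct_for_criterion_unreliability(r_bar, r_yy=0.60)   # assumed reliability
print(round(r_bar, 3), round(rho, 3))
```

Full psychometric meta-analyses also correct for range restriction and estimate residual variance across studies; this sketch shows only why a corrected ρ exceeds the mean observed r.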
Malcolm Sehlström, Jessica K. Ljungberg, Markus B. T. Nyström, Anna-Sara Claeson
Improved understanding of what it takes to be a pilot is an ongoing effort within aviation. We used an exploratory approach to examine whether there are personality-related differences in who completes the Swedish military pilot education. Assessment records of 182 applicants accepted to the education between 2004 and 2020 were studied (mean age 24, SD 4.2; 96% men, 4% women). Discriminant analysis was used to explore which personality traits and suitability ratings might be related to education completion. The analysis included suitability assessments made by senior pilots and by a psychologist, a number of traits assessed by the same psychologist, and the Commander Trait Inventory (CTI). The resulting discriminant function was significant (Wilks' Λ = 0.808, χ²(20) = 32.817, p = .035), with a canonical correlation of 0.44. The model classified 74.1% of sample cases correctly. The modeling suggests that both the senior pilot assessment and the psychologist assessment predict education completion.
Also contributing were the traits energy, professional motivation, study forecast, and leader potential.
"Relations of personality factors and suitability ratings to Swedish military pilot education completion" by Malcolm Sehlström, Jessica K. Ljungberg, Markus B. T. Nyström, and Anna-Sara Claeson. International Journal of Selection and Assessment, 33(1). Published 2024-07-01. doi:10.1111/ijsa.12492. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1111/ijsa.12492
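The study's discriminant analysis classifies applicants from several predictors at once. As a toy illustration of the underlying classification idea only, here is a one-predictor analogue using hypothetical suitability ratings and a midpoint cut between group means (the actual model uses multiple traits and a canonical discriminant function):

```python
import statistics

# Hypothetical composite suitability ratings (1-9 scale); completers assumed higher
completers = [6, 7, 5, 6, 8, 7]
non_completers = [4, 5, 3, 5, 4, 6]

# Midpoint between group means: the one-predictor analogue of a linear discriminant cut
threshold = (statistics.fmean(completers) + statistics.fmean(non_completers)) / 2

# Classify each record by the cut and count correct assignments
correct = sum(s > threshold for s in completers) + sum(s <= threshold for s in non_completers)
accuracy = correct / (len(completers) + len(non_completers))
print(f"threshold={threshold}, accuracy={accuracy:.1%}")
```

The "74.1% of cases classified correctly" figure in the abstract is the multivariate version of this accuracy count.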
Artificial intelligence (AI) chatbots, such as Chat Generative Pre-trained Transformer (ChatGPT), may threaten the validity of selection processes. This study provides the first examination of how AI cheating in the asynchronous video interview (AVI) may impact interview performance and applicant reactions. In a preregistered experiment, Prolific respondents (N = 245) completed an AVI after being randomly assigned to a non-ChatGPT, ChatGPT-Verbatim (read AI-generated responses word-for-word), or ChatGPT-Personalized condition (provided their résumé/contextual instructions to ChatGPT and modified the AI-generated responses). The ChatGPT conditions received considerably higher scores on overall performance and content than the non-ChatGPT condition. However, response delivery ratings did not differ between conditions and the ChatGPT conditions received lower honesty ratings. Both ChatGPT conditions rated the AVI as lower on procedural justice than the non-ChatGPT condition.
"ChatGPT, can you take my job interview? Examining artificial intelligence cheating in the asynchronous video interview" by Damian Canagasuriam and Eden-Raye Lukacik. International Journal of Selection and Assessment, 33(1). Published 2024-06-24. doi:10.1111/ijsa.12491. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1111/ijsa.12491
Psychological Capital (PsyCap) represents an individual's positive and resourceful state, defined by high levels of self-efficacy, optimism, hope, and resiliency. Since its inception, extensive research has focused on exploring the factors influencing and outcomes associated with PsyCap within organizational contexts. Consequently, there has been a growing demand for reliable assessment tools to measure PsyCap accurately. The present multi-study investigation aimed to examine whether the two main measures of Psychological Capital, namely the Psychological Capital Questionnaire and the Implicit-Psychological Capital Questionnaire, show convergence in measuring the same underlying construct. In Study 1, using data from 327 employees from whom we obtained both self- and coworker reports on both explicit and implicit Psychological Capital, we evaluated the degree of convergence between measures using a Multitrait-Multimethod approach. In Study 2, we used six-wave longitudinal data from 354 employees, gathered every week for 6 consecutive weeks, to test a series of STARTS models, to decompose the proportions of variance of all the components (i.e., trait, state and error) of both Psychological Capital measures, and to compare their magnitude and similarity. In this second study, we also compared their longitudinal predictive power with respect to important organizational outcomes (i.e., work engagement and emotional exhaustion). All in all, results provided empirical evidence for the high degree of convergence of explicit and implicit measures of Psychological Capital. Implications and potential applications of our findings are discussed.
"Equivalence between direct and indirect measures of psychological capital" by Guido Alessandri and Lorenzo Filosa. International Journal of Selection and Assessment, 32(4), 594-611. Published 2024-06-14. doi:10.1111/ijsa.12488
There is a long and successful history of personality research in organizational contexts and personality assessments are now widely used in a variety of human resources or talent management interventions. In this tradition, assessment typically involves describing (future) employees' personality profiles, and then using this information to select or adapt work roles to optimally meet employees' traits. Although useful, one limitation of this approach is that it overlooks employees' motivations and abilities to develop themselves in their pursuit of greater person-environment fit. This paper therefore argues for a new type of personality assessment that goes beyond the current descriptive approach. Specifically, we propose assessing employees' Personality Development Goals (PDGs) at work to complement the traditional assessment of “who are you?” with information about “who do you want to be?”. We first briefly summarize the current approach to personality assessment and highlight its limitations. Then, we take stock of the research on PDGs in clinical and personality literatures, and outline the reasons for translating this into organizational applications. We end by describing the key principles that should inform the implementation of PDGs at work and propose a number of future research directions to support and advance this practice.
"Personality development goals at work: A new frontier in personality assessment in organizations" by Sofie Dupré and Bart Wille. International Journal of Selection and Assessment, 33(1). Published 2024-06-14. doi:10.1111/ijsa.12490
The need for "Considered Estimation" versus "Conservative Estimation" when ranking or comparing predictors of job performance
Philip Bobko, Philip L. Roth, Huy Le, In-Sue Oh, Jesus Salgado
International Journal of Selection and Assessment, 33(1). https://doi.org/10.1111/ijsa.12489 (published 2024-06-14)

A recent attempt to generate an updated ranking of the operational validity of 25 selection procedures, using a process labeled "conservative estimation" (Sackett et al., 2022), is flawed and misleading. When conservative estimation's treatment of range restriction (RR) is used, it is unclear whether reported validity differences among predictors reflect (i) true differences, (ii) differential degrees of RR (different u values), (iii) differential correction for RR (no RR correction vs. RR correction), or (iv) some combination of these factors. We demonstrate that this creates bias and introduces confounds when ranking (or comparing) selection procedures. Second, the list of selection procedures being directly compared mixes predictor methods with predictor constructs, in spite of the substantial effect construct saturation has on validity estimates (e.g., Arthur & Villado, 2008). This causes additional confounds that cloud comparative interpretations. Based on these and other concerns, we outline an alternative, "considered estimation" strategy for comparing predictors of job performance. Basic tenets include applying RR corrections in the same manner for all predictors, parsing validities of selection methods by constructs, extending the logic beyond validities (e.g., to ds), thoughtfully reconsidering prior meta-analyses, conducting sensitivity analyses, and accounting for nonindependence across studies.
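The confound the abstract describes in points (ii) and (iii) can be made concrete with a small numeric sketch. Below is a minimal illustration (the values and the helper name `correct_rr` are my own, not from the article) using the standard Thorndike Case II correction for direct range restriction, r_c = rU / sqrt((U² − 1)r² + 1), where U = 1/u is the ratio of unrestricted to restricted predictor standard deviations. Two predictors with identical observed validities receive different corrected validities when their u values differ; correcting one but not the other can reorder a ranking even when the underlying validities are the same.

```python
import math


def correct_rr(r: float, u: float) -> float:
    """Thorndike Case II correction for direct range restriction.

    r: observed (restricted-sample) validity coefficient.
    u: restricted SD / unrestricted SD of the predictor (0 < u <= 1).
    """
    U = 1.0 / u  # ratio of unrestricted to restricted SD
    return (r * U) / math.sqrt((U**2 - 1.0) * r**2 + 1.0)


# Illustrative values (assumed, not taken from the article):
# two predictors with the SAME observed validity but different restriction.
r_obs = 0.30
strong_restriction = correct_rr(r_obs, u=0.6)  # heavy restriction -> large upward correction
mild_restriction = correct_rr(r_obs, u=0.9)    # mild restriction -> small correction

print(f"observed r = {r_obs:.2f}")
print(f"corrected, u = 0.6: {strong_restriction:.3f}")
print(f"corrected, u = 0.9: {mild_restriction:.3f}")
# If only the first predictor were corrected while the second were reported
# uncorrected, the apparent validity gap would mix true differences with
# differential u values and differential correction -- the confound the
# authors flag when predictors are ranked under "conservative estimation."
```

This is why the authors' first tenet is to apply RR corrections in the same manner for all predictors before any ranking is interpreted.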