
Latest publications from the International Journal of Selection and Assessment

Investigating Effects of Providing Information and Professional Experience on Production of Stories in Response to Past-Behavior Questions
IF 2.6 · CAS Tier 4 (Management) · JCR Q3 (Management) · Pub Date: 2025-03-04 · DOI: 10.1111/ijsa.70004
Marie-Eve Tescari, Adrian Bangerter, Christina Györkös, Charlène Padoan, Sandrine Fasel, Lucile Nicolier, Laurène Hondius, Karen Ohnmacht

Past-behavior questions invite applicants to describe their behavior in a past work-related situation, that is, to tell a story about that situation. However, applicants often fail to produce stories in response to such questions. In two experiments (n = 91 and n = 102), we investigated the effects of providing information about questions and professional experience (2 × 2 between-subjects design) on the production of stories and interview performance. In Experiment 1, providing information and professional experience did not affect story production, but professional experience increased performance. In Experiment 2, we enhanced the manipulation of information, giving more explicit guidance about expected responses and increasing the contrast in professional experience. Experienced participants received better performance ratings than inexperienced ones. Neither providing information nor professional experience affected the production of stories, but both affected performance. Story narrative quality was coded post hoc in both studies. Providing information and professional experience did not affect narrative quality in Experiment 1 but did in Experiment 2. Results add to our understanding of individual differences affecting responses to past-behavior questions and have practical implications for facilitating appropriate responses.
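
The 2 × 2 between-subjects design described above is typically analyzed with a two-way ANOVA testing both main effects (information, experience) and their interaction. The sketch below is a minimal illustration of that kind of analysis, not the authors' actual code; the data are simulated and the column names are hypothetical.

```python
# Minimal sketch of analyzing a 2 x 2 between-subjects design with a
# two-way ANOVA (illustrative only; data simulated, column names hypothetical).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)
n_per_cell = 25
cells = [(i, e) for i in ("info", "no_info") for e in ("experienced", "novice")]
df = pd.DataFrame(
    [(i, e, rng.normal(loc=3.0 + 0.5 * (e == "experienced"), scale=1.0))
     for i, e in cells for _ in range(n_per_cell)],
    columns=["information", "experience", "performance"],
)

# Main effects of information and experience, plus their interaction.
model = smf.ols("performance ~ C(information) * C(experience)", data=df).fit()
print(anova_lm(model, typ=2))
```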

{"title":"Investigating Effects of Providing Information and Professional Experience on Production of Stories in Response to Past-Behavior Questions","authors":"Marie-Eve Tescari,&nbsp;Adrian Bangerter,&nbsp;Christina Györkös,&nbsp;Charlène Padoan,&nbsp;Sandrine Fasel,&nbsp;Lucile Nicolier,&nbsp;Laurène Hondius,&nbsp;Karen Ohnmacht","doi":"10.1111/ijsa.70004","DOIUrl":"https://doi.org/10.1111/ijsa.70004","url":null,"abstract":"<div>\u0000 \u0000 <p>Past-behavior questions invite applicants to describe their behavior in a past work-related situation, that is, to tell a story about that situation. However, applicants often fail to produce stories in response to such questions. In two experiments (<i>n</i> = 91 and <i>n</i> = 102), we investigated the effects of providing information about questions and professional experience (2 × 2 between-subjects design) on the production of stories and interview performance. In Experiment 1, providing information and professional experience did not affect story production, but professional experience increased performance. In Experiment 2, we enhanced the manipulation of information, giving more explicit guidance about expected responses and increasing the contrast in professional experience. Experienced participants received better performance ratings than inexperienced ones. Neither providing information nor professional experience affected the production of stories, but both affected performance. Story narrative quality was coded post hoc in both studies. Providing information and professional experience did not affect narrative quality in Experiment 1 but did in Experiment 2. Results add to our understanding of individual differences affecting responses to past-behavior questions and have practical implications for facilitating appropriate responses.</p></div>","PeriodicalId":51465,"journal":{"name":"International Journal of Selection and Assessment","volume":"33 2","pages":""},"PeriodicalIF":2.6,"publicationDate":"2025-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143533257","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
All Your Base Are Belong to Us: The Urgent Reality of Unproctored Testing in the Age of LLMs
IF 2.6 · CAS Tier 4 (Management) · JCR Q3 (Management) · Pub Date: 2025-03-04 · DOI: 10.1111/ijsa.70005
Louis Hickman

The release of new generative artificial intelligence (AI) tools, including new large language models (LLMs), continues at a rapid pace. Upon the release of OpenAI's new o1 models, I reconducted Hickman et al.'s (2024) analyses examining how well LLMs perform on a quantitative ability (number series) test. GPT-4 scored below the 20th percentile (compared to thousands of human test takers), but o1 scored at the 95th percentile. In response to these updated findings and Lievens and Dunlop's (2025) article about the effects of LLMs on the validity of pre-employment assessments, I make an urgent call to action for selection and assessment researchers and practitioners. A recent survey suggests that a large proportion of applicants are already using generative AI tools to complete high-stakes assessments, and it seems that no current assessments will be safe for long. Thus, I offer possibilities for the future of testing, detail their benefits and drawbacks, and provide recommendations. These possibilities are: increased use of proctoring, adding strict time limits, using LLM detection software, using think-aloud (or similar) protocols, collecting and analyzing trace data, emphasizing samples over signs, and redesigning assessments to allow LLM use during completion. Several of these possibilities inspire future research to modernize assessment. Future research should seek to improve our understanding of how to design valid assessments that allow LLM use, how to effectively use trace test-taker data, and whether think-aloud protocols can help differentiate experts and novices.
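
For context, the percentile comparisons here (GPT-4 below the 20th percentile, o1 at the 95th) amount to ranking a single test score against a human norm distribution. A minimal sketch, using simulated norms rather than the real scores from Hickman et al.'s thousands of human test takers, and illustrative (not actual) model scores:

```python
# Percentile rank of a model's test score against a human norm sample.
# Norms are simulated here; real norms would come from human test takers.
import numpy as np
from scipy.stats import percentileofscore

rng = np.random.default_rng(42)
human_scores = rng.normal(loc=20, scale=5, size=5000)  # hypothetical norms

for label, llm_score in [("GPT-4", 15.0), ("o1", 28.5)]:  # illustrative scores
    pct = percentileofscore(human_scores, llm_score, kind="rank")
    print(f"{label}: {pct:.1f}th percentile")
```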

{"title":"All Your Base Are Belong to Us: The Urgent Reality of Unproctored Testing in the Age of LLMs","authors":"Louis Hickman","doi":"10.1111/ijsa.70005","DOIUrl":"https://doi.org/10.1111/ijsa.70005","url":null,"abstract":"<p>The release of new generative artificial intelligence (AI) tools, including new large language models (LLMs), continues at a rapid pace. Upon the release of OpenAI's new o1 models, I reconducted Hickman et al.'s (2024) analyses examining how well LLMs perform on a quantitative ability (number series) test. GPT-4 scored below the 20th percentile (compared to thousands of human test takers), but o1 scored at the 95th percentile. In response to these updated findings and Lievens and Dunlop's (2025) article about the effects of LLMs on the validity of pre-employment assessments, I make an urgent call to action for selection and assessment researchers and practitioners. A recent survey suggests that a large proportion of applicants are already using generative AI tools to complete high-stakes assessments, and it seems that no current assessments will be safe for long. Thus, I offer possibilities for the future of testing, detail their benefits and drawbacks, and provide recommendations. These possibilities are: increased use of proctoring, adding strict time limits, using LLM detection software, using think-aloud (or similar) protocols, collecting and analyzing trace data, emphasizing samples over signs, and redesigning assessments to allow LLM use during completion. Several of these possibilities inspire future research to modernize assessment. Future research should seek to improve our understanding of how to design valid assessments that allow LLM use, how to effectively use trace test-taker data, and whether think-aloud protocols can help differentiate experts and novices.</p>","PeriodicalId":51465,"journal":{"name":"International Journal of Selection and Assessment","volume":"33 2","pages":""},"PeriodicalIF":2.6,"publicationDate":"2025-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/ijsa.70005","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143533255","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Social Desirability Tendency in Personality-Based Job Interviews—A Question of Interview Format?
IF 2.6 · CAS Tier 4 (Management) · JCR Q3 (Management) · Pub Date: 2025-03-04 · DOI: 10.1111/ijsa.70006
Valerie Schröder, Anna Luca Heimann, Pia Ingold, Nicolas Roulin, Marianne Schmid Mast, Manuel Bachmann, Martin Kleinmann

Today's variety of interview formats raises the question of their interchangeability. For personality interviews, a crucial question is whether different formats are comparably robust against applicants' social desirability tendency (SDT), so as to ensure accurate measurement. Using a within-subjects design in a simulated selection setting with 211 participants, this study examined how SDT affects personality scores in face-to-face, asynchronous video, and written interviews, all with similar interview questions designed to measure personality. Relationships between interview scores and SDT were weakest in the face-to-face format and strongest in the written format, and differed depending on which personality trait was assessed. The findings highlight the suitability of different interview formats for measuring personality, with important implications for interview design and personality assessment.

{"title":"Social Desirability Tendency in Personality-Based Job Interviews—A Question of Interview Format?","authors":"Valerie Schröder,&nbsp;Anna Luca Heimann,&nbsp;Pia Ingold,&nbsp;Nicolas Roulin,&nbsp;Marianne Schmid Mast,&nbsp;Manuel Bachmann,&nbsp;Martin Kleinmann","doi":"10.1111/ijsa.70006","DOIUrl":"https://doi.org/10.1111/ijsa.70006","url":null,"abstract":"<div>\u0000 \u0000 <p>Today's variety of interview formats raises the question of their interchangeability. For personality interviews, a crucial question is whether different formats are comparably robust against applicants' social desirability tendency (SDT) to ensure an accurate measurement. Using a within-subjects design in a simulated selection setting with 211 participants, this study examined how SDT affects personality scores in a face-to-face, asynchronous video, and written interview—all with similar interview questions designed to measure personality. Relationships between interview scores and SDT were weakest in the face-to-face format and strongest in the written format and differed depending on which personality trait was assessed. The findings highlight the suitedness of different interview formats for measuring personality with important implications for interview design and personality assessment.</p></div>","PeriodicalId":51465,"journal":{"name":"International Journal of Selection and Assessment","volume":"33 2","pages":""},"PeriodicalIF":2.6,"publicationDate":"2025-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143533256","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Attitudes Toward Cybervetting in Germany: Impact on Organizational Attractiveness Depends on Social Media Platform
IF 2.6 · CAS Tier 4 (Management) · JCR Q3 (Management) · Pub Date: 2025-02-17 · DOI: 10.1111/ijsa.70003
Philipp Schäpers, Franz W. Mönke, Chiara-Maria Frieler, Nicolas Roulin, Johannes Basch

Cybervetting, the practice of assessing applicants' social media profiles in personnel selection, is widely used. However, the individuals concerned often perceive this practice negatively. We propose that attitudes toward cybervetting may depend on the platform used and the cultural context. We therefore transfer the attitudes-toward-cybervetting scale to a context with strict data regulations: Germany. In an online between-subjects experiment with platform users and non-users (N = 100 working professionals and students), we examined attitudes toward cybervetting on different social media platforms (professional: LinkedIn vs. personal: Instagram) and their relationship with organizational attractiveness. German participants viewed cybervetting on professional platforms with more skepticism than American participants. Hierarchical regression analysis revealed higher perceived fairness, lower perceived invasion of privacy, and higher organizational attractiveness when cybervetting was done on professional platforms.
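
The hierarchical regression reported here enters predictors in blocks and tests whether each block adds explained variance. A minimal sketch under assumed variable names and simulated data (the actual covariates and blocks in the study may differ):

```python
# Sketch of a hierarchical (blockwise) regression: enter a control
# variable first, then the platform manipulation, and test the
# increment in R^2. All variable names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(1)
n = 100
df = pd.DataFrame({
    "age": rng.integers(20, 60, n),
    "platform": rng.choice(["professional", "personal"], n),
    "attractiveness": rng.normal(3.5, 1.0, n),
})

step1 = smf.ols("attractiveness ~ age", data=df).fit()
step2 = smf.ols("attractiveness ~ age + C(platform)", data=df).fit()
print(f"R2 step 1: {step1.rsquared:.3f}, step 2: {step2.rsquared:.3f}")
print(anova_lm(step1, step2))  # F-test for the R^2 increment
```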

{"title":"Attitudes Toward Cybervetting in Germany: Impact on Organizational Attractiveness Depends on Social Media Platform","authors":"Philipp Schäpers,&nbsp;Franz W. Mönke,&nbsp;Chiara-Maria Frieler,&nbsp;Nicolas Roulin,&nbsp;Johannes Basch","doi":"10.1111/ijsa.70003","DOIUrl":"https://doi.org/10.1111/ijsa.70003","url":null,"abstract":"<p>Cybervetting, assessing social media in personnel selection, is widely used. However, individuals concerned often perceive this practice negatively. We propose that attitudes toward cybervetting may depend on the platform used and the cultural context. Thus, we transfer the attitudes toward cybervetting scale to a context with strict data regulations: Germany. In an online between-subjects experiment with platform users and non-users (<i>N </i>= 100 working professionals and students), we examined attitudes toward cybervetting on different social media platforms (professional: LinkedIn vs. personal: Instagram) and their relationship with organizational attractiveness. We found that German participants viewed cybervetting on professional platforms with more skepticism than American participants. Hierarchical regression analysis revealed higher perceived fairness, lower invasion of privacy, and higher organizational attractiveness when cybervetting was done on professional platforms.</p>","PeriodicalId":51465,"journal":{"name":"International Journal of Selection and Assessment","volume":"33 1","pages":""},"PeriodicalIF":2.6,"publicationDate":"2025-02-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/ijsa.70003","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143424083","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Why Participant Perceptions of Assessment Center Exercises Matter: Justice, Motivation, Self-Efficacy, and Performance
IF 2.6 · CAS Tier 4 (Management) · JCR Q3 (Management) · Pub Date: 2025-02-04 · DOI: 10.1111/ijsa.70002
Sylvia G. Roch, Kathryn Devon

Despite expectations, assessment center (AC) participants' performance ratings often are not strongly correlated across AC exercises. Why remains a puzzle. Perhaps one piece of the puzzle is that participants view AC exercises with varying levels of motivation, justice, and self-efficacy, which in turn relate to exercise performance; these perceptions are the topic of the current research. Based on 123 participants completing an AC consisting of six exercises (two leaderless group discussions, an oral presentation, a written case analysis, a personality assessment, and a cognitive ability exercise), results showed that motivation, self-efficacy, and procedural justice levels differed among exercises and generally related to exercise performance. Two interventions designed to improve how participants perceive AC exercises (one focusing on self-efficacy and the other on justice) were not successful. Implications are discussed.

{"title":"Why Participant Perceptions of Assessment Center Exercises Matter: Justice, Motivation, Self-Efficacy, and Performance","authors":"Sylvia G. Roch,&nbsp;Kathryn Devon","doi":"10.1111/ijsa.70002","DOIUrl":"https://doi.org/10.1111/ijsa.70002","url":null,"abstract":"<div>\u0000 \u0000 <p>Despite expectations, assessment center (AC) participants' performance ratings often are not strongly correlated over AC exercises. Why is a puzzle? Perhaps one piece of the puzzle is that participants view AC exercises with varying levels of motivation, justice, and self-efficacy, which relate to exercise performance, the topic of the current research. Based on 123 participants completing an AC consisting of six exercises (two leaderless group discussions, oral presentation, written case analysis, personality assessment, and cognitive ability exercise), results showed that motivation, self-efficacy, and procedural justice levels differed among exercises, which generally related to exercise performance. Two interventions designed to improve how participants perceive AC exercises (one focusing on self-efficacy and the other on justice) were not successful. Implications are discussed.</p></div>","PeriodicalId":51465,"journal":{"name":"International Journal of Selection and Assessment","volume":"33 1","pages":""},"PeriodicalIF":2.6,"publicationDate":"2025-02-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143111799","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Are Games Always Fun and Fair? A Comparison of Reactions to Different Game-Based Assessments
IF 2.6 · CAS Tier 4 (Management) · JCR Q3 (Management) · Pub Date: 2025-01-27 · DOI: 10.1111/ijsa.12520
Marie Luise Ohlms, Klaus G. Melchers

Game-based assessment (GBA) has garnered attention in the personnel selection and assessment context owing to its postulated potential to improve applicant reactions. However, GBAs can differ considerably depending on their specific design. Therefore, we sought to determine whether test taker reactions to GBAs vary owing to the different manifestations that GBAs may take on, and to test takers' individual preferences for such assessments. In an experimental study, each of N = 147 participants was shown six different GBAs and asked to rate several applicant reaction variables concerning these assessments. We found that reactions to GBAs were not inherently positive even though GBAs were generally perceived as enjoyable. However, perceptions of fairness and organizational attractiveness varied considerably between GBAs. Participants' age and experience with video games were related to reactions but had less impact than the different GBAs. Our results suggest that a technology-as-designed approach, which considers GBAs as a combination of multiple components (e.g., game elements), is crucial in GBA research to provide generalizable results for theory and practice.

{"title":"Are Games Always Fun and Fair? A Comparison of Reactions to Different Game-Based Assessments","authors":"Marie Luise Ohlms,&nbsp;Klaus G. Melchers","doi":"10.1111/ijsa.12520","DOIUrl":"https://doi.org/10.1111/ijsa.12520","url":null,"abstract":"<p>Game-based assessment (GBA) has garnered attention in the personnel selection and assessment context owing to its postulated potential to improve applicant reactions. However, GBAs can differ considerably depending on their specific design. Therefore, we sought to determine whether test taker reactions to GBAs vary owing to the different manifestations that GBAs may take on, and to test takers' individual preferences for such assessments. In an experimental study, each of <i>N</i> = 147 participants was shown six different GBAs and asked to rate several applicant reaction variables concerning these assessments. We found that reactions to GBAs were not inherently positive even though GBAs were generally perceived as enjoyable. However, perceptions of fairness and organizational attractiveness varied considerably between GBAs. Participants' age and experience with video games were related to reactions but had less impact than the different GBAs. Our results suggest that a technology-as-designed approach, which considers GBAs as a combination of multiple components (e.g., game elements), is crucial in GBA research to provide generalizable results for theory and practice.</p>","PeriodicalId":51465,"journal":{"name":"International Journal of Selection and Assessment","volume":"33 1","pages":""},"PeriodicalIF":2.6,"publicationDate":"2025-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/ijsa.12520","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143119821","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Comparing Proctored and Unproctored Cognitive Ability Testing in High-Stakes Personnel Selection
IF 2.6 · CAS Tier 4 (Management) · JCR Q3 (Management) · Pub Date: 2025-01-27 · DOI: 10.1111/ijsa.70001
Tore Nøttestad Norrøne, Morten Nordmo

New advances in computerized adaptive testing (CAT) have increased the feasibility of high-stakes unproctored testing of general mental ability (GMA) in personnel selection contexts. This study presents the results of a within-subject investigation of the convergent validity of unproctored tests. Three batteries of cognitive ability tests were administered during personnel selection in the Norwegian Armed Forces. A total of 537 candidates completed two sets of proctored fixed-length GMA tests before and during the selection process. In addition, an at-home unproctored CAT battery was administered before the selection process began. Differences and similarities in the convergent validity of the tests were evaluated. The convergent validity coefficients did not differ significantly between proctored and unproctored batteries, at both the observed GMA score and latent factor levels. The distributions and standardized residuals of test scores were quite similar overall for proctored-proctored and proctored-unproctored comparisons, showing no evidence of score inflation or deflation in the unproctored tests. The similarities between proctored and unproctored results also extended to the moderately searchable word-similarity test. Although some unlikely individual cases were observed, the overall results suggest that the unproctored tests maintained their convergent validity.
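
Comparing convergent validity across conditions amounts to comparing two dependent correlations that share a common criterion. One simple, assumption-light way to test such a difference is a bootstrap; the sketch below uses simulated data, not the Norwegian Armed Forces sample:

```python
# Sketch: comparing the convergent validity of proctored vs. unproctored
# scores against a common criterion via a bootstrap of the difference
# between the two (dependent) correlations. Data are simulated.
import numpy as np

rng = np.random.default_rng(7)
n = 537
g = rng.normal(size=n)                             # latent GMA (hypothetical)
proctored   = g + rng.normal(scale=0.6, size=n)
unproctored = g + rng.normal(scale=0.6, size=n)
criterion   = g + rng.normal(scale=0.8, size=n)    # e.g., second proctored battery

def r(x, y):
    return np.corrcoef(x, y)[0, 1]

obs_diff = r(proctored, criterion) - r(unproctored, criterion)
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)                    # resample candidates with replacement
    boot.append(r(proctored[idx], criterion[idx])
                - r(unproctored[idx], criterion[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"diff = {obs_diff:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```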

{"title":"Comparing Proctored and Unproctored Cognitive Ability Testing in High-Stakes Personnel Selection","authors":"Tore Nøttestad Norrøne,&nbsp;Morten Nordmo","doi":"10.1111/ijsa.70001","DOIUrl":"https://doi.org/10.1111/ijsa.70001","url":null,"abstract":"<div>\u0000 \u0000 <p>New advances in computerized adaptive testing (CAT) have increased the feasibility of high-stakes unproctored testing of general mental ability (GMA) in personnel selection contexts. This study presents the results from a within-subject investigation of the convergent validity of unproctored tests. Three batteries of cognitive ability tests were administered during personnel selection in the Norwegian Armed Forces. A total of 537 candidates completed two sets of proctored fixed-length GMA tests before and during the selection process. In addition, an at-home unproctored CAT battery of tests was administered before the selection process began. Differences and similarities between the convergent validity of the tests were evaluated. The convergent validity coefficients did not significantly differ between proctored and unproctored batteries, both on observed GMA scores and the latent factor level. The distribution and standardized residuals of test scores comparing proctored-proctored and proctored-unproctored were overall quite similar and showed no evidence of score inflation or deflation in the unproctored tests. The similarities between proctored and unproctored results also extended to the moderately searchable words similarity test. Although some unlikely individual cases were observed, the overall results suggest that the unproctored tests maintained their convergent validity.</p></div>","PeriodicalId":51465,"journal":{"name":"International Journal of Selection and Assessment","volume":"33 1","pages":""},"PeriodicalIF":2.6,"publicationDate":"2025-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143119822","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A Meta-Analysis of Accent Bias in Employee Interviews: The Effects of Gender and Accent Stereotypes, Interview Modality, and Other Moderating Features
IF 2.6 · CAS Tier 4 (Management) · JCR Q3 (Management) · Pub Date: 2025-01-23 · DOI: 10.1111/ijsa.12519
Henri T. Maindidze, Jason G. Randall, Michelle P. Martin-Raugh, Katrisha M. Smith

To address concerns of subtle discrimination against stigmatized groups, we meta-analyze the magnitude and moderators of bias against non-standard accents in employment interview evaluations. Results from a multi-level random-effects meta-analysis (unique effects: k = 41, N = 7,596; multi-level effects accounting for dependencies: k = 120, N = 20,873) demonstrate that standard-accented (SA) interviewees are consistently favored over non-standard-accented (NSA) interviewees (d = 0.46). Accent bias is stronger against women compared to men, particularly when evaluator samples are predominantly female, and was strongly predicted by interviewers' stereotypes of NSA interviewees as less competent and, to a lesser extent, as less warm. Accent bias was not significantly impacted by perceptions of comprehensibility, accentedness, accent type, interview modality, study rigor, or job speaking skill requirements.
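
A multi-level random-effects meta-analysis is typically run with dedicated software (e.g., the metafor package in R). As a simplified illustration of the core idea, the sketch below pools standardized mean differences with the DerSimonian-Laird estimator, using made-up effect sizes rather than the study's actual k = 41 effects:

```python
# Minimal DerSimonian-Laird random-effects pooling of standardized mean
# differences (d). A simplification of the multi-level model in the
# abstract; the effect sizes and group sizes below are hypothetical.
import numpy as np

d = np.array([0.30, 0.55, 0.40, 0.62, 0.35])   # hypothetical effect sizes
n1 = np.array([50, 80, 60, 40, 100])           # SA group sizes
n2 = np.array([50, 75, 65, 45, 95])            # NSA group sizes

# Approximate sampling variance of d (Hedges & Olkin).
v = (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))

w = 1 / v                                      # fixed-effect weights
d_fe = np.sum(w * d) / np.sum(w)
Q = np.sum(w * (d - d_fe) ** 2)                # heterogeneity statistic
c = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (Q - (len(d) - 1)) / c)        # between-study variance

w_re = 1 / (v + tau2)                          # random-effects weights
d_re = np.sum(w_re * d) / np.sum(w_re)
se = np.sqrt(1 / np.sum(w_re))
print(f"pooled d = {d_re:.2f} (SE = {se:.2f}), tau^2 = {tau2:.3f}")
```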

{"title":"A Meta-Analysis of Accent Bias in Employee Interviews: The Effects of Gender and Accent Stereotypes, Interview Modality, and Other Moderating Features","authors":"Henri T. Maindidze,&nbsp;Jason G. Randall,&nbsp;Michelle P. Martin-Raugh,&nbsp;Katrisha M. Smith","doi":"10.1111/ijsa.12519","DOIUrl":"https://doi.org/10.1111/ijsa.12519","url":null,"abstract":"<p>To address concerns of subtle discrimination against stigmatized groups, we meta-analyze the magnitude and moderators of bias against non-standard accents in employment interview evaluations. Results from a multi-level random-effects meta-analysis (unique effects: <i>k</i> = 41, <i>N</i> = 7,596; multi-level effects accounting for dependencies: <i>k</i> = 120, <i>N</i> = 20,873) demonstrate that standard-accented (SA) interviewees are consistently favored over non-standard-accented (NSA) interviewees (<i>d</i> = 0.46). Accent bias is stronger against women compared to men, particularly when evaluator samples are predominantly female, and was strongly predicted by interviewers' stereotypes of NSA interviewees as less competent and, to a lesser extent, as less warm. Accent bias was not significantly impacted by perceptions of comprehensibility, accentedness, accent type, interview modality, study rigor, or job speaking skill requirements.</p>","PeriodicalId":51465,"journal":{"name":"International Journal of Selection and Assessment","volume":"33 1","pages":""},"PeriodicalIF":2.6,"publicationDate":"2025-01-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/ijsa.12519","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143118622","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Toward Theory-Based Volitional Personality Development Interventions at Work
IF 2.6 · CAS Tier 4 (Management) · JCR Q3 (Management) · Pub Date: 2025-01-19 · DOI: 10.1111/ijsa.70000
Sofie Dupré, Bart Wille

In this article, we respond to four commentaries (Li et al., 2024; Hennecke & Ingold, 2025; Perossa & Connelly, 2024; Ones et al., 2024) on our article “Personality development goals at work: A new frontier in personality assessment in organizations.” We start by addressing four overarching considerations from the commentaries, including (a) how to approach the assessment of personality development goals (PDGs), (b) the feasibility of personality development interventions, (c) potential trade-offs involved, and (d) the value of personality development beyond established HR practices. Next, in an attempt to integrate these considerations and stimulate future research in this area, we outline three critical elements of what we believe can be the foundation of theory-based personality development interventions at work. For this purpose, we first posit that personality development at work can be rethought such that the focus shifts from “changing an employee's trait levels” to “expanding that employee's comfort zone across a range of personality states.” Second, to have sustained effects, interventions need to accomplish more than simply “learning new behaviors,” by effectively targeting all layers of personality—behavioral, cognitive, and emotional. Finally, we introduce optimal functioning, encompassing both performance and well-being aspects, as the ultimate criterion for evaluating the success of personality development interventions. We hope these reactions and integrative ideas will inspire future research on personality development goals assessment and personality development interventions in the work context.

{"title":"Toward Theory-Based Volitional Personality Development Interventions at Work","authors":"Sofie Dupré,&nbsp;Bart Wille","doi":"10.1111/ijsa.70000","DOIUrl":"https://doi.org/10.1111/ijsa.70000","url":null,"abstract":"<div>\u0000 \u0000 <p>In this article, we respond to four commentaries (Li et al., 2024; Hennecke &amp; Ingold, 2025; Perossa &amp; Connelly, 2024; Ones et al., 2024) on our article “Personality development goals at work: A new frontier in personality assessment in organizations.” We start by addressing four overarching considerations from the commentaries, including (a) how to approach PDG assessment, (b) the feasibility of personality development interventions, (c) potential trade-offs involved, and (d) the value of personality development beyond established HR practices. Next, in an attempt to integrate these considerations and stimulate future research in this area, we outline three critical elements of what we believe can be the foundation of theory-based personality development interventions at work. For this purpose, we first posit that personality development at work can be rethought such that the focus shifts from “changing an employee's trait levels” to “expanding that employee's comfort zone across a range of personality states.” Second, to have sustained effects, interventions need to accomplish more than simply “learning new behaviors,” by effectively targeting all layers of personality—behavioral, cognitive, and emotional. Finally, we introduce optimal functioning, encompassing both performance and well-being aspects, as the ultimate criterion for evaluating the success of personality development interventions. We hope these reactions and integrative ideas will inspire future research on personality development goals assessment and personality development interventions in the work context.</p></div>","PeriodicalId":51465,"journal":{"name":"International Journal of Selection and Assessment","volume":"33 1","pages":""},"PeriodicalIF":2.6,"publicationDate":"2025-01-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143116278","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Evaluating the Impact of Faking on the Criterion-Related Validity of Personality Assessments
IF 2.6 · CAS Tier 4 (Management) · JCR Q3 (Management) · Pub Date: 2025-01-06 · DOI: 10.1111/ijsa.12518
Andrew B. Speer, Angie Y. Delacruz, Takudzwa Chawota, Lauren J. Wegmeyer, Andrew P. Tenbrink, Carter Gibson, Chris Frost

Personality assessments are commonly used in hiring, but concerns about faking have raised doubts about their effectiveness. Qualitative reviews show mixed and inconsistent impacts of faking on criterion-related validity. To address this, a series of meta-analyses was conducted using matched samples of honest and motivated respondents (i.e., participants instructed to fake, or actual applicants). In 80 paired samples, the average difference in validity coefficients between honest and motivated samples across five-factor model traits ranged from 0.05 to 0.08 (largest for conscientiousness and emotional stability), with the validity ratio ranging from 64% to 72%. Validity was attenuated when candidates faked, regardless of sample type, trait relevance, or the importance of impression management, though variation existed across criterion types. Both real applicant samples (k = 25) and instructed-response conditions (k = 55) showed a reduction in validity across honest and motivated conditions, including when managerial ratings of job performance were the criterion. Thus, faking impacted validity in operational samples. This suggests that practitioners should be cautious about relying on concurrent validation evidence for personality inventories and should expect attenuated validity in operational applicant settings, particularly for conscientiousness and emotional stability scales. That said, it is important to highlight that personality assessments generally maintained useful validity even under motivated conditions.
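
The "validity ratio" reported above is simply the motivated-condition validity divided by the honest-condition validity. A worked example with hypothetical coefficients chosen to fall within the reported range:

```python
# Illustration of the "validity ratio": the ratio of criterion-related
# validity under motivated vs. honest conditions. The coefficients are
# hypothetical, chosen to mirror the reported 0.05-0.08 attenuation.
honest_r = 0.25        # validity in the honest condition (assumed)
drop = 0.07            # attenuation within the reported range
motivated_r = honest_r - drop

validity_ratio = motivated_r / honest_r
print(f"motivated r = {motivated_r:.2f}, "
      f"validity ratio = {validity_ratio:.0%}")   # ~72%
```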

{"title":"Evaluating the Impact of Faking on the Criterion-Related Validity of Personality Assessments","authors":"Andrew B. Speer,&nbsp;Angie Y. Delacruz,&nbsp;Takudzwa Chawota,&nbsp;Lauren J. Wegmeyer,&nbsp;Andrew P. Tenbrink,&nbsp;Carter Gibson,&nbsp;Chris Frost","doi":"10.1111/ijsa.12518","DOIUrl":"https://doi.org/10.1111/ijsa.12518","url":null,"abstract":"<p>Personality assessments are commonly used in hiring, but concerns about faking have raised doubts about their effectiveness. Qualitative reviews show mixed and inconsistent impacts of faking on criterion-related validity. To address this, a series of meta-analyses were conducted using matched samples of honest and motivated respondents (i.e., instructed to fake, applicants). In 80 paired samples, the average difference in validity coefficients between honest and motivated samples across five-factor model traits ranged from 0.05 to 0.08 (largest for conscientiousness and emotional stability), with the validity ratio ranging from 64% to 72%. Validity was attenuated when candidates faked regardless of sample type, trait relevance, or the importance of impression management, though variation existed across criterion types. Both real applicant samples (<i>k</i> = 25) and instructed response conditions (<i>k</i> = 55) showed a reduction in validity across honest and motivated conditions, including when managerial ratings of job performance were the criterion. Thus, faking impacted the validity in operational samples. This suggests that practitioners should be cautious relying upon concurrent validation evidence (for personality inventories) and expect attenuated validity in operational applicant settings, particularly for conscientiousness and emotional stability scales. That said, it is important to highlight that personality assessments generally maintained useful validity even under-motivated conditions.</p>","PeriodicalId":51465,"journal":{"name":"International Journal of Selection and Assessment","volume":"33 1","pages":""},"PeriodicalIF":2.6,"publicationDate":"2025-01-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/ijsa.12518","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143112513","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0