The Effects of the Training Sample Size, Ground Truth Reliability, and NLP Method on Language-Based Automatic Interview Scores’ Psychometric Properties

Organizational Research Methods | IF 8.9 | CAS Zone 2 (Management) | JCR Q1 (Management) | Pub Date: 2024-07-25 | DOI: 10.1177/10944281241264027
Louis Hickman, Josh Liff, Caleb Rottman, Charles Calderwood
{"title":"The Effects of the Training Sample Size, Ground Truth Reliability, and NLP Method on Language-Based Automatic Interview Scores’ Psychometric Properties","authors":"Louis Hickman, Josh Liff, Caleb Rottman, Charles Calderwood","doi":"10.1177/10944281241264027","DOIUrl":null,"url":null,"abstract":"While machine learning (ML) can validly score psychological constructs from behavior, several conditions often change across studies, making it difficult to understand why the psychometric properties of ML models differ across studies. We address this gap in the context of automatically scored interviews. Across multiple datasets, for interview- or question-level scoring of self-reported, tested, and interviewer-rated constructs, we manipulate the training sample size and natural language processing (NLP) method while observing differences in ground truth reliability. We examine how these factors influence the ML model scores’ test–retest reliability and convergence, and we develop multilevel models for estimating the convergent-related validity of ML model scores in similar interviews. When the ground truth is interviewer ratings, hundreds of observations are adequate for research purposes, while larger samples are recommended for practitioners to support generalizability across populations and time. However, self-reports and tested constructs require larger training samples. Particularly when the ground truth is interviewer ratings, NLP embedding methods improve upon count-based methods. Given mixed findings regarding ground truth reliability, we discuss future research possibilities on factors that affect supervised ML models’ psychometric properties.","PeriodicalId":19689,"journal":{"name":"Organizational Research Methods","volume":null,"pages":null},"PeriodicalIF":8.9000,"publicationDate":"2024-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Organizational Research Methods","FirstCategoryId":"91","ListUrlMain":"https://doi.org/10.1177/10944281241264027","RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"MANAGEMENT","Score":null,"Total":0}
引用次数: 0

Abstract

While machine learning (ML) can validly score psychological constructs from behavior, several conditions often change across studies, making it difficult to understand why the psychometric properties of ML models differ across studies. We address this gap in the context of automatically scored interviews. Across multiple datasets, for interview- or question-level scoring of self-reported, tested, and interviewer-rated constructs, we manipulate the training sample size and natural language processing (NLP) method while observing differences in ground truth reliability. We examine how these factors influence the ML model scores’ test–retest reliability and convergence, and we develop multilevel models for estimating the convergent-related validity of ML model scores in similar interviews. When the ground truth is interviewer ratings, hundreds of observations are adequate for research purposes, while larger samples are recommended for practitioners to support generalizability across populations and time. However, self-reports and tested constructs require larger training samples. Particularly when the ground truth is interviewer ratings, NLP embedding methods improve upon count-based methods. Given mixed findings regarding ground truth reliability, we discuss future research possibilities on factors that affect supervised ML models’ psychometric properties.
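To make the study design concrete, below is a minimal, hypothetical sketch of the kind of experiment the abstract describes: a supervised model is trained to reproduce interviewer ratings from response text at several training sample sizes, once with count-based features and once with a dense representation, and "convergence" is measured as the correlation between model scores and held-out ground-truth ratings. Everything here is an illustrative assumption, not the authors' actual pipeline; in particular, the Ridge scorer, the placeholder data, and the LSA stand-in for NLP embeddings are choices made only for this sketch.

```python
# Hypothetical sketch of the experimental logic described in the abstract.
# Assumptions: scikit-learn workflow, Ridge regression as the scoring model,
# LSA over TF-IDF as a lightweight stand-in for embedding methods, and
# placeholder variables `texts` / `ratings` for the interview data.
import numpy as np
from scipy.stats import pearsonr
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline


def convergence(texts, ratings, featurizer, n_train, seed=0):
    """Train on n_train transcripts, then correlate model scores with
    held-out ground-truth ratings (the abstract's 'convergence')."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(texts))
    train, test = idx[:n_train], idx[n_train:]
    model = make_pipeline(featurizer, Ridge(alpha=1.0))
    model.fit([texts[i] for i in train], ratings[train])
    preds = model.predict([texts[i] for i in test])
    return pearsonr(preds, ratings[test])[0]


# Count-based features vs. a dense representation (LSA is used here only
# as a stand-in for the embedding methods compared in the paper).
count_features = CountVectorizer(min_df=2)
embed_features = make_pipeline(TfidfVectorizer(min_df=2),
                               TruncatedSVD(n_components=300))

# texts: list of interview responses; ratings: array of interviewer ratings
# (both hypothetical placeholders here).
# for n in (100, 250, 500, 1000):  # manipulated training sample sizes
#     print(n, convergence(texts, ratings, count_features, n),
#              convergence(texts, ratings, embed_features, n))
```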
Source Journal
CiteScore: 23.20
Self-citation rate: 3.20%
Articles published: 17
About the Journal
Organizational Research Methods (ORM) was founded to introduce pertinent methodological advancements to researchers in the organizational sciences. Its objective is to promote the application of current and emerging methodologies to advance both theory and research practice. Articles are expected to be comprehensible to readers whose background matches the methodological and statistical training provided in contemporary organizational science doctoral programs, and should be written for accessibility: highly technical content belongs in appendices, authors are encouraged to include example data and computer code when relevant, and authors should explicitly outline how their contribution can advance organizational theory and research practice.
Latest Articles in This Journal
Taking It Easy: Off-the-Shelf Versus Fine-Tuned Supervised Modeling of Performance Appraisal Text
Hello World! Building Computational Models to Represent Social and Organizational Theory
The Effects of the Training Sample Size, Ground Truth Reliability, and NLP Method on Language-Based Automatic Interview Scores’ Psychometric Properties
Enhancing Causal Pursuits in Organizational Science: Targeting the Effect of Treatment on the Treated in Research on Vulnerable Populations
Analyzing Social Interaction in Organizations: A Roadmap for Reflexive Choice