You are what you click: using machine learning to model trace data for psychometric measurement

International Journal of Testing (IF 1.0, Q2, Social Sciences, Interdisciplinary) · Published 2022-10-02 · DOI: 10.1080/15305058.2022.2134394
R. Landers, Elena M. Auer, Gabriel Mersy, Sebastian Marin, Jason Blaik
Citations: 0

Abstract

Assessment trace data, such as mouse positions and their timing, offer interesting and provocative reflections of individual differences yet are currently underutilized by testing professionals. In this article, we present a 10-step procedure to maximize the probability that a trace data modeling project will be successful: 1) grounding the project in psychometric theory, 2) building technical infrastructure to collect trace data, 3) designing a useful developmental validation study, 4) using a holdout validation approach with collected data, 5) using exploratory analysis to conduct meaningful feature engineering, 6) identifying useful machine learning algorithms to predict a thoughtfully chosen criterion, 7) engineering a machine learning model with meaningful internal cross-validation and hyperparameter selection, 8) conducting model diagnostics to assess whether the resulting model is overfitted, underfitted, or within acceptable tolerance, and 9) testing the success of the final model in meeting conceptual, technical, and psychometric goals. If deemed successful, trace data model predictions could then be engineered into decision-making systems. We present this framework within the broader view of psychometrics, exploring the challenges of developing psychometrically valid models using such complex data with much weaker trait signals than assessment developers have typically attempted to model.
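The modeling steps the abstract enumerates (4: holdout validation; 6–7: algorithm selection with internal cross-validation and hyperparameter tuning; 8: over/underfitting diagnostics) can be sketched in a few lines of scikit-learn. This is a minimal illustrative sketch with synthetic data standing in for engineered trace-data features, not the authors' actual pipeline; the model, grid, and feature construction are assumptions.

```python
# Sketch of steps 4-8: holdout split, candidate algorithm tuned with
# internal cross-validation, then a simple overfitting/underfitting check.
import numpy as np
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))                        # stand-in for engineered mouse-trace features
y = 0.5 * X[:, 0] + rng.normal(scale=1.0, size=500)   # deliberately weak "trait" signal

# Step 4: holdout validation -- reserve data never touched during tuning.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Steps 6-7: a candidate algorithm with internal 5-fold cross-validation
# over a small hyperparameter grid.
search = GridSearchCV(
    RandomForestRegressor(random_state=0),
    param_grid={"max_depth": [2, 5, None], "n_estimators": [100, 300]},
    cv=5,
    scoring="r2",
)
search.fit(X_train, y_train)

# Step 8: diagnostics -- a large train/holdout gap suggests overfitting;
# low scores on both suggest underfitting.
train_r2 = search.score(X_train, y_train)
test_r2 = search.score(X_test, y_test)
print(f"best params: {search.best_params_}")
print(f"train R^2 = {train_r2:.2f}, holdout R^2 = {test_r2:.2f}")
```

With a weak signal like this, the holdout R² will sit well below the training R², which is exactly the gap step 8 asks the analyst to inspect before declaring the model within tolerance.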
Source journal

International Journal of Testing (Social Sciences, Interdisciplinary)
CiteScore: 3.60 · Self-citation rate: 11.80% · Articles published: 13
Latest articles from this journal

Combining Mokken Scale Analysis and Rasch measurement theory to explore differences in measurement quality between subgroups
Examining the construct validity of the MIDUS version of the Multidimensional Personality Questionnaire (MPQ)
Where nonresponse is at its loudest: Cross-country and individual differences in item nonresponse across the PISA 2018 student questionnaire
The choice between cognitive diagnosis and item response theory: A case study from medical education
Beyond group comparisons: Accounting for intersectional sources of bias in international survey measures