How Scoring Approaches Impact Estimates of Growth in the Presence of Survey Item Ceiling Effects

Applied Psychological Measurement · Impact Factor 1.0 · JCR Q4 (Psychology, Mathematical) · CAS Tier 4 (Psychology) · Published: 2024-03-16 · DOI: 10.1177/01466216241238749
Kelly D. Edwards, J. Soland
{"title":"How Scoring Approaches Impact Estimates of Growth in the Presence of Survey Item Ceiling Effects","authors":"Kelly D. Edwards, J. Soland","doi":"10.1177/01466216241238749","DOIUrl":null,"url":null,"abstract":"Survey scores are often the basis for understanding how individuals grow psychologically and socio-emotionally. A known problem with many surveys is that the items are all “easy”—that is, individuals tend to use only the top one or two response categories on the Likert scale. Such an issue could be especially problematic, and lead to ceiling effects, when the same survey is administered repeatedly over time. In this study, we conduct simulation and empirical studies to (a) quantify the impact of these ceiling effects on growth estimates when using typical scoring approaches like sum scores and unidimensional item response theory (IRT) models and (b) examine whether approaches to survey design and scoring, including employing various longitudinal multidimensional IRT (MIRT) models, can mitigate any bias in growth estimates. We show that bias is substantial when using typical scoring approaches and that, while lengthening the survey helps somewhat, using a longitudinal MIRT model with plausible values scoring all but alleviates the issue. Results have implications for scoring surveys in growth studies going forward, as well as understanding how Likert item ceiling effects may be contributing to replication failures.","PeriodicalId":48300,"journal":{"name":"Applied Psychological Measurement","volume":null,"pages":null},"PeriodicalIF":1.0000,"publicationDate":"2024-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Applied Psychological Measurement","FirstCategoryId":"102","ListUrlMain":"https://doi.org/10.1177/01466216241238749","RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"PSYCHOLOGY, MATHEMATICAL","Score":null,"Total":0}
引用次数: 0

Abstract

Survey scores are often the basis for understanding how individuals grow psychologically and socio-emotionally. A known problem with many surveys is that the items are all “easy”—that is, individuals tend to use only the top one or two response categories on the Likert scale. Such an issue could be especially problematic, and lead to ceiling effects, when the same survey is administered repeatedly over time. In this study, we conduct simulation and empirical studies to (a) quantify the impact of these ceiling effects on growth estimates when using typical scoring approaches like sum scores and unidimensional item response theory (IRT) models and (b) examine whether approaches to survey design and scoring, including employing various longitudinal multidimensional IRT (MIRT) models, can mitigate any bias in growth estimates. We show that bias is substantial when using typical scoring approaches and that, while lengthening the survey helps somewhat, using a longitudinal MIRT model with plausible values scoring all but alleviates the issue. Results have implications for scoring surveys in growth studies going forward, as well as understanding how Likert item ceiling effects may be contributing to replication failures.
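The attenuation the abstract describes can be illustrated with a small simulation. The following is a minimal Python sketch, not the authors' actual simulation design: it generates 5-point Likert responses from a graded response model with deliberately "easy" (low) thresholds, applies a uniform latent gain of 0.5 SD between two time points, and compares the standardized sum-score gain with the true latent gain. All parameter values and the simulate_grm helper are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(42)

n_persons, n_items, n_cats = 1000, 10, 5   # 5-point Likert items
true_growth = 0.5                           # latent gain in SD units

# "Easy" items: low thresholds push most responses into the top categories,
# creating a ceiling (values chosen for illustration only).
a = rng.uniform(1.0, 2.0, n_items)                                    # discriminations
b = np.sort(rng.uniform(-2.5, -0.5, (n_items, n_cats - 1)), axis=1)   # ordered thresholds

def simulate_grm(theta, a, b, rng):
    """Draw graded-response-model responses given one latent score per person."""
    n_p, n_i = theta.shape[0], a.shape[0]
    # P(X >= k) for each boundary k = 1..K-1
    z = a[None, :, None] * (theta[:, None, None] - b[None, :, :])
    p_geq = 1.0 / (1.0 + np.exp(-z))                       # (persons, items, K-1)
    # Category probabilities from differences of adjacent boundary curves
    upper = np.concatenate([np.ones_like(p_geq[:, :, :1]), p_geq], axis=2)
    lower = np.concatenate([p_geq, np.zeros_like(p_geq[:, :, :1])], axis=2)
    probs = upper - lower                                  # (persons, items, K)
    cum = probs.cumsum(axis=2)
    u = rng.random((n_p, n_i, 1))
    return (u > cum).sum(axis=2)                           # category codes 0..K-1

theta_t1 = rng.normal(0.0, 1.0, n_persons)
theta_t2 = theta_t1 + true_growth                          # everyone gains 0.5 SD

sum_t1 = simulate_grm(theta_t1, a, b, rng).sum(axis=1)
sum_t2 = simulate_grm(theta_t2, a, b, rng).sum(axis=1)

# Standardized gain on the sum-score metric vs. the true latent gain
est_growth = (sum_t2.mean() - sum_t1.mean()) / sum_t1.std()
print(f"true latent gain:        {true_growth:.2f} SD")
print(f"sum-score gain:          {est_growth:.2f} SD")
print(f"share at max score, t2:  {(sum_t2 == n_items * (n_cats - 1)).mean():.1%}")

With thresholds this low, a sizable share of respondents already sit at or near the maximum sum score, so the standardized gain on the sum-score metric typically comes out well below the true 0.5 SD. This is the kind of bias the study quantifies for sum scores and unidimensional IRT scoring, and then addresses with longitudinal MIRT models and plausible-values scoring.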
Source journal: Applied Psychological Measurement
CiteScore: 2.30 · Self-citation rate: 8.30% · Articles published per year: 50
About the journal: Applied Psychological Measurement publishes empirical research on the application of techniques of psychological measurement to substantive problems in all areas of psychology and related disciplines.
Latest articles in this journal:
Item Response Modeling of Clinical Instruments With Filter Questions: Disentangling Symptom Presence and Severity
A Note on Standard Errors for Multidimensional Two-Parameter Logistic Models Using Gaussian Variational Estimation
Measurement Invariance Testing Works
Accommodating and Extending Various Models for Special Effects Within the Generalized Partially Confirmatory Factor Analysis Framework
Investigating Directional Invariance in an Item Response Tree Model for Extreme Response Style and Trait-Based Unfolding Responses