A Test-Retest Reliability Generalization Meta-Analysis of Judgments Via the Policy-Capturing Technique

IF 8.9 | CAS Region 2 (Management) | JCR Q1 (MANAGEMENT) | Organizational Research Methods | Pub Date: 2021-05-12 | DOI: 10.1177/10944281211011529
Ze Zhu, Alan J. Tomassetti, R. Dalal, Shannon W. Schrader, Kevin Loo, Isaac E. Sabat, Balca Alaybek, You Zhou, Chelsea Jones, Shea Fyffe
{"title":"基于策略捕获技术的测试-复验可靠性综合评判元分析","authors":"Ze Zhu, Alan J. Tomassetti, R. Dalal, Shannon W. Schrader, Kevin Loo, Isaac E. Sabat, Balca Alaybek, You Zhou, Chelsea Jones, Shea Fyffe","doi":"10.1177/10944281211011529","DOIUrl":null,"url":null,"abstract":"Policy capturing is a widely used technique, but the temporal stability of policy-capturing judgments has long been a cause for concern. This article emphasizes the importance of reporting reliability, and in particular test-retest reliability, estimates in policy-capturing studies. We found that only 164 of 955 policy-capturing studies (i.e., 17.17%) reported a test-retest reliability estimate. We then conducted a reliability generalization meta-analysis on policy-capturing studies that did report test-retest reliability estimates—and we obtained an average reliability estimate of .78. We additionally examined 16 potential methodological and substantive antecedents to test-retest reliability (equivalent to moderators in validity generalization studies). We found that test-retest reliability was robust to variation in 14 of the 16 factors examined but that reliability was higher in paper-and-pencil studies than in web-based studies and was higher for behavioral intention judgments than for other (e.g., attitudinal and perceptual) judgments. We provide an agenda for future research. Finally, we provide several best-practice recommendations for researchers (and journal reviewers) with regard to (a) reporting test-retest reliability, (b) designing policy-capturing studies for appropriate reportage, and (c) properly interpreting test-retest reliability in policy-capturing studies.","PeriodicalId":19689,"journal":{"name":"Organizational Research Methods","volume":"25 1","pages":"541 - 574"},"PeriodicalIF":8.9000,"publicationDate":"2021-05-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/10944281211011529","citationCount":"2","resultStr":"{\"title\":\"A Test-Retest Reliability Generalization Meta-Analysis of Judgments Via the Policy-Capturing Technique\",\"authors\":\"Ze Zhu, Alan J. Tomassetti, R. Dalal, Shannon W. Schrader, Kevin Loo, Isaac E. Sabat, Balca Alaybek, You Zhou, Chelsea Jones, Shea Fyffe\",\"doi\":\"10.1177/10944281211011529\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Policy capturing is a widely used technique, but the temporal stability of policy-capturing judgments has long been a cause for concern. This article emphasizes the importance of reporting reliability, and in particular test-retest reliability, estimates in policy-capturing studies. We found that only 164 of 955 policy-capturing studies (i.e., 17.17%) reported a test-retest reliability estimate. We then conducted a reliability generalization meta-analysis on policy-capturing studies that did report test-retest reliability estimates—and we obtained an average reliability estimate of .78. We additionally examined 16 potential methodological and substantive antecedents to test-retest reliability (equivalent to moderators in validity generalization studies). We found that test-retest reliability was robust to variation in 14 of the 16 factors examined but that reliability was higher in paper-and-pencil studies than in web-based studies and was higher for behavioral intention judgments than for other (e.g., attitudinal and perceptual) judgments. We provide an agenda for future research. 
Finally, we provide several best-practice recommendations for researchers (and journal reviewers) with regard to (a) reporting test-retest reliability, (b) designing policy-capturing studies for appropriate reportage, and (c) properly interpreting test-retest reliability in policy-capturing studies.\",\"PeriodicalId\":19689,\"journal\":{\"name\":\"Organizational Research Methods\",\"volume\":\"25 1\",\"pages\":\"541 - 574\"},\"PeriodicalIF\":8.9000,\"publicationDate\":\"2021-05-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://sci-hub-pdf.com/10.1177/10944281211011529\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Organizational Research Methods\",\"FirstCategoryId\":\"91\",\"ListUrlMain\":\"https://doi.org/10.1177/10944281211011529\",\"RegionNum\":2,\"RegionCategory\":\"管理学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"MANAGEMENT\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Organizational Research Methods","FirstCategoryId":"91","ListUrlMain":"https://doi.org/10.1177/10944281211011529","RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"MANAGEMENT","Score":null,"Total":0}
Citations: 2

Abstract

Policy capturing is a widely used technique, but the temporal stability of policy-capturing judgments has long been a cause for concern. This article emphasizes the importance of reporting reliability, and in particular test-retest reliability, estimates in policy-capturing studies. We found that only 164 of 955 policy-capturing studies (i.e., 17.17%) reported a test-retest reliability estimate. We then conducted a reliability generalization meta-analysis on policy-capturing studies that did report test-retest reliability estimates—and we obtained an average reliability estimate of .78. We additionally examined 16 potential methodological and substantive antecedents to test-retest reliability (equivalent to moderators in validity generalization studies). We found that test-retest reliability was robust to variation in 14 of the 16 factors examined but that reliability was higher in paper-and-pencil studies than in web-based studies and was higher for behavioral intention judgments than for other (e.g., attitudinal and perceptual) judgments. We provide an agenda for future research. Finally, we provide several best-practice recommendations for researchers (and journal reviewers) with regard to (a) reporting test-retest reliability, (b) designing policy-capturing studies for appropriate reportage, and (c) properly interpreting test-retest reliability in policy-capturing studies.
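To make the two quantities in the abstract concrete, here is a minimal Python sketch of (a) a test-retest reliability estimate for one respondent's policy-capturing judgments and (b) a sample-size-weighted average of study-level reliabilities, the basic pooling step in a reliability generalization meta-analysis. The function names and all numbers are hypothetical illustrations; the paper's actual meta-analytic procedure (e.g., its weighting and any corrections) may differ.

```python
import numpy as np

def test_retest_reliability(time1, time2):
    """Pearson correlation between the same respondent's judgments of
    identical scenarios at two sessions -- the usual test-retest estimate."""
    return np.corrcoef(time1, time2)[0, 1]

def weighted_mean_reliability(reliabilities, sample_sizes):
    """Sample-size-weighted mean of per-study reliability estimates,
    a simple pooling step in reliability generalization."""
    r = np.asarray(reliabilities, dtype=float)
    n = np.asarray(sample_sizes, dtype=float)
    return float(np.sum(n * r) / np.sum(n))

# One respondent rates the same 8 scenarios at two sessions (made-up data).
t1 = [3, 5, 4, 2, 5, 1, 4, 3]
t2 = [3, 4, 4, 2, 5, 2, 4, 3]
print(round(test_retest_reliability(t1, t2), 2))  # 0.94

# Pooling three made-up study-level estimates by sample size.
print(round(weighted_mean_reliability([0.72, 0.81, 0.78], [40, 120, 60]), 3))  # 0.785
```

In a real reliability generalization analysis, per-study estimates like these would additionally be modeled against methodological and substantive moderators (the 16 antecedents examined in the article).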
Source journal
CiteScore: 23.20
Self-citation rate: 3.20%
Annual publications: 17
About the journal: Organizational Research Methods (ORM) was founded with the aim of introducing pertinent methodological advancements to researchers in the organizational sciences. The objective of ORM is to promote the application of current and emerging methodologies to advance both theory and research practice. Articles are expected to be comprehensible to readers with a background consistent with the methodological and statistical training provided in contemporary organizational sciences doctoral programs, and the text should be presented in a manner that facilitates accessibility. For instance, highly technical content should be placed in appendices, and authors are encouraged to include example data and computer code when relevant. Additionally, authors should explicitly outline how their contribution has the potential to advance organizational theory and research practice.
Latest articles in this journal:
The Internet Never Forgets: A Four-Step Scraping Tutorial, Codebase, and Database for Longitudinal Organizational Website Data
One Size Does Not Fit All: Unraveling Item Response Process Heterogeneity Using the Mixture Dominance-Unfolding Model (MixDUM)
Taking It Easy: Off-the-Shelf Versus Fine-Tuned Supervised Modeling of Performance Appraisal Text
Hello World! Building Computational Models to Represent Social and Organizational Theory
The Effects of the Training Sample Size, Ground Truth Reliability, and NLP Method on Language-Based Automatic Interview Scores' Psychometric Properties