Evaluating the robustness of parameter estimates in cognitive models: A meta-analytic review of multinomial processing tree models across the multiverse of estimation methods.

IF 17.3 · Zone 1 (Psychology) · JCR Q1 (Psychology) · Psychological Bulletin · Pub Date: 2024-08-01 · Epub Date: 2024-06-27 · DOI: 10.1037/bul0000434
Henrik Singmann, Daniel W Heck, Marius Barth, Edgar Erdfelder, Nina R Arnold, Frederik Aust, Jimmy Calanchini, Fabian E Gümüsdagli, Sebastian S Horn, David Kellen, Karl C Klauer, Dora Matzke, Franziska Meissner, Martha Michalkiewicz, Marie Luisa Schaper, Christoph Stahl, Beatrice G Kuhlmann, Julia Groß
{"title":"评估认知模型参数估计的稳健性:在多种估算方法中对多叉处理树模型进行元分析回顾。","authors":"Henrik Singmann, Daniel W Heck, Marius Barth, Edgar Erdfelder, Nina R Arnold, Frederik Aust, Jimmy Calanchini, Fabian E Gümüsdagli, Sebastian S Horn, David Kellen, Karl C Klauer, Dora Matzke, Franziska Meissner, Martha Michalkiewicz, Marie Luisa Schaper, Christoph Stahl, Beatrice G Kuhlmann, Julia Groß","doi":"10.1037/bul0000434","DOIUrl":null,"url":null,"abstract":"<p><p>Researchers have become increasingly aware that data-analysis decisions affect results. Here, we examine this issue systematically for multinomial processing tree (MPT) models, a popular class of cognitive models for categorical data. Specifically, we examine the robustness of MPT model parameter estimates that arise from two important decisions: the level of data aggregation (complete-pooling, no-pooling, or partial-pooling) and the statistical framework (frequentist or Bayesian). These decisions span a <i>multiverse</i> of estimation methods. We synthesized the data from 13,956 participants (164 published data sets) with a meta-analytic strategy and analyzed the <i>magnitude of divergence</i> between estimation methods for the parameters of nine popular MPT models in psychology (e.g., process-dissociation, source monitoring). We further examined moderators as potential <i>sources of divergence</i>. We found that the absolute divergence between estimation methods was small on average (<.04; with MPT parameters ranging between 0 and 1); in some cases, however, divergence amounted to nearly the maximum possible range (.97). Divergence was partly explained by few moderators (e.g., the specific MPT model parameter, uncertainty in parameter estimation), but not by other plausible candidate moderators (e.g., parameter trade-offs, parameter correlations) or their interactions. Partial-pooling methods showed the smallest divergence within and across levels of pooling and thus seem to be an appropriate default method. Using MPT models as an example, we show how transparency and robustness can be increased in the field of cognitive modeling. (PsycInfo Database Record (c) 2024 APA, all rights reserved).</p>","PeriodicalId":20854,"journal":{"name":"Psychological bulletin","volume":" ","pages":"965-1003"},"PeriodicalIF":17.3000,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Evaluating the robustness of parameter estimates in cognitive models: A meta-analytic review of multinomial processing tree models across the multiverse of estimation methods.\",\"authors\":\"Henrik Singmann, Daniel W Heck, Marius Barth, Edgar Erdfelder, Nina R Arnold, Frederik Aust, Jimmy Calanchini, Fabian E Gümüsdagli, Sebastian S Horn, David Kellen, Karl C Klauer, Dora Matzke, Franziska Meissner, Martha Michalkiewicz, Marie Luisa Schaper, Christoph Stahl, Beatrice G Kuhlmann, Julia Groß\",\"doi\":\"10.1037/bul0000434\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Researchers have become increasingly aware that data-analysis decisions affect results. Here, we examine this issue systematically for multinomial processing tree (MPT) models, a popular class of cognitive models for categorical data. Specifically, we examine the robustness of MPT model parameter estimates that arise from two important decisions: the level of data aggregation (complete-pooling, no-pooling, or partial-pooling) and the statistical framework (frequentist or Bayesian). 
These decisions span a <i>multiverse</i> of estimation methods. We synthesized the data from 13,956 participants (164 published data sets) with a meta-analytic strategy and analyzed the <i>magnitude of divergence</i> between estimation methods for the parameters of nine popular MPT models in psychology (e.g., process-dissociation, source monitoring). We further examined moderators as potential <i>sources of divergence</i>. We found that the absolute divergence between estimation methods was small on average (<.04; with MPT parameters ranging between 0 and 1); in some cases, however, divergence amounted to nearly the maximum possible range (.97). Divergence was partly explained by few moderators (e.g., the specific MPT model parameter, uncertainty in parameter estimation), but not by other plausible candidate moderators (e.g., parameter trade-offs, parameter correlations) or their interactions. Partial-pooling methods showed the smallest divergence within and across levels of pooling and thus seem to be an appropriate default method. Using MPT models as an example, we show how transparency and robustness can be increased in the field of cognitive modeling. (PsycInfo Database Record (c) 2024 APA, all rights reserved).</p>\",\"PeriodicalId\":20854,\"journal\":{\"name\":\"Psychological bulletin\",\"volume\":\" \",\"pages\":\"965-1003\"},\"PeriodicalIF\":17.3000,\"publicationDate\":\"2024-08-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Psychological bulletin\",\"FirstCategoryId\":\"102\",\"ListUrlMain\":\"https://doi.org/10.1037/bul0000434\",\"RegionNum\":1,\"RegionCategory\":\"心理学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2024/6/27 0:00:00\",\"PubModel\":\"Epub\",\"JCR\":\"Q1\",\"JCRName\":\"PSYCHOLOGY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Psychological bulletin","FirstCategoryId":"102","ListUrlMain":"https://doi.org/10.1037/bul0000434","RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2024/6/27 0:00:00","PubModel":"Epub","JCR":"Q1","JCRName":"PSYCHOLOGY","Score":null,"Total":0}
Citations: 0

Abstract

Researchers have become increasingly aware that data-analysis decisions affect results. Here, we examine this issue systematically for multinomial processing tree (MPT) models, a popular class of cognitive models for categorical data. Specifically, we examine the robustness of MPT model parameter estimates that arise from two important decisions: the level of data aggregation (complete-pooling, no-pooling, or partial-pooling) and the statistical framework (frequentist or Bayesian). These decisions span a multiverse of estimation methods. We synthesized the data from 13,956 participants (164 published data sets) with a meta-analytic strategy and analyzed the magnitude of divergence between estimation methods for the parameters of nine popular MPT models in psychology (e.g., process-dissociation, source monitoring). We further examined moderators as potential sources of divergence. We found that the absolute divergence between estimation methods was small on average (<.04; with MPT parameters ranging between 0 and 1); in some cases, however, divergence amounted to nearly the maximum possible range (.97). Divergence was partly explained by few moderators (e.g., the specific MPT model parameter, uncertainty in parameter estimation), but not by other plausible candidate moderators (e.g., parameter trade-offs, parameter correlations) or their interactions. Partial-pooling methods showed the smallest divergence within and across levels of pooling and thus seem to be an appropriate default method. Using MPT models as an example, we show how transparency and robustness can be increased in the field of cognitive modeling. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
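
The pooling levels described in the abstract can be made concrete with a small numerical illustration. The sketch below (Python with NumPy/SciPy; not the authors' meta-analytic pipeline) fits a simple two-parameter process-dissociation MPT model to simulated data once with complete-pooling (aggregated counts) and once with no-pooling (one fit per participant), then computes the absolute divergence between the two sets of parameter estimates. The data, sample sizes, and helper names (pd_probs, fit) are illustrative assumptions; partial-pooling and Bayesian estimation are omitted for brevity.

```python
# Minimal sketch (not the authors' pipeline): maximum-likelihood fitting of a
# two-parameter process-dissociation MPT model under two levels of pooling --
# complete-pooling (aggregate counts) and no-pooling (one fit per participant)
# -- and the absolute divergence between the resulting estimates.
# All data below are simulated for illustration only.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import binom

rng = np.random.default_rng(seed=1)

def pd_probs(params):
    """Category probabilities of the process-dissociation model.
    params = (R, A): P(old|inclusion) = R + (1-R)A, P(old|exclusion) = (1-R)A."""
    R, A = params
    return R + (1 - R) * A, (1 - R) * A

def neg_log_lik(params, k_inc, n_inc, k_exc, n_exc):
    """Negative binomial log-likelihood of 'old' responses in both conditions."""
    p_inc, p_exc = pd_probs(params)
    eps = 1e-9  # keep probabilities strictly inside (0, 1)
    p_inc, p_exc = np.clip(p_inc, eps, 1 - eps), np.clip(p_exc, eps, 1 - eps)
    return -(binom.logpmf(k_inc, n_inc, p_inc) + binom.logpmf(k_exc, n_exc, p_exc))

def fit(k_inc, n_inc, k_exc, n_exc):
    """Maximum-likelihood estimates of (R, A), constrained to the unit interval."""
    res = minimize(neg_log_lik, x0=[0.5, 0.5],
                   args=(k_inc, n_inc, k_exc, n_exc),
                   bounds=[(0.001, 0.999)] * 2, method="L-BFGS-B")
    return res.x

# Simulate 30 participants with heterogeneous true parameters and 50 trials
# per condition (hypothetical numbers, chosen only for the demonstration).
n_part, n_trials = 30, 50
true_R = rng.beta(8, 4, n_part)   # recollection
true_A = rng.beta(5, 5, n_part)   # automatic influence
k_inc = rng.binomial(n_trials, true_R + (1 - true_R) * true_A)
k_exc = rng.binomial(n_trials, (1 - true_R) * true_A)

# Complete-pooling: one fit to the summed counts of all participants.
est_complete = fit(k_inc.sum(), n_part * n_trials, k_exc.sum(), n_part * n_trials)

# No-pooling: a separate fit per participant, then average the estimates.
est_no = np.mean([fit(k_inc[i], n_trials, k_exc[i], n_trials)
                  for i in range(n_part)], axis=0)

# Divergence between estimation methods: absolute difference per parameter,
# bounded by [0, 1] because MPT parameters are probabilities.
divergence = np.abs(est_complete - est_no)
print("complete-pooling (R, A):", np.round(est_complete, 3))
print("no-pooling mean (R, A): ", np.round(est_no, 3))
print("absolute divergence:    ", np.round(divergence, 3))
```

Because MPT parameters are probabilities, the divergence computed this way is naturally bounded between 0 and 1, which is the scale on which the abstract's average divergence (<.04) and maximum divergence (.97) are reported.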

Source journal: Psychological Bulletin (Medicine - Psychology)
CiteScore: 33.60
Self-citation rate: 0.90%
Articles published: 21
Journal description: Psychological Bulletin publishes syntheses of research in scientific psychology. Research syntheses seek to summarize past research by drawing overall conclusions from many separate investigations that address related or identical hypotheses. A research synthesis typically presents the authors' assessments:
- of the state of knowledge concerning the relations of interest;
- of critical assessments of the strengths and weaknesses in past research;
- of important issues that research has left unresolved, thereby directing future research so it can yield a maximum amount of new information.