Henrik Singmann, Daniel W Heck, Marius Barth, Edgar Erdfelder, Nina R Arnold, Frederik Aust, Jimmy Calanchini, Fabian E Gümüsdagli, Sebastian S Horn, David Kellen, Karl C Klauer, Dora Matzke, Franziska Meissner, Martha Michalkiewicz, Marie Luisa Schaper, Christoph Stahl, Beatrice G Kuhlmann, Julia Groß
{"title":"评估认知模型参数估计的稳健性:在多种估算方法中对多叉处理树模型进行元分析回顾。","authors":"Henrik Singmann, Daniel W Heck, Marius Barth, Edgar Erdfelder, Nina R Arnold, Frederik Aust, Jimmy Calanchini, Fabian E Gümüsdagli, Sebastian S Horn, David Kellen, Karl C Klauer, Dora Matzke, Franziska Meissner, Martha Michalkiewicz, Marie Luisa Schaper, Christoph Stahl, Beatrice G Kuhlmann, Julia Groß","doi":"10.1037/bul0000434","DOIUrl":null,"url":null,"abstract":"<p><p>Researchers have become increasingly aware that data-analysis decisions affect results. Here, we examine this issue systematically for multinomial processing tree (MPT) models, a popular class of cognitive models for categorical data. Specifically, we examine the robustness of MPT model parameter estimates that arise from two important decisions: the level of data aggregation (complete-pooling, no-pooling, or partial-pooling) and the statistical framework (frequentist or Bayesian). These decisions span a <i>multiverse</i> of estimation methods. We synthesized the data from 13,956 participants (164 published data sets) with a meta-analytic strategy and analyzed the <i>magnitude of divergence</i> between estimation methods for the parameters of nine popular MPT models in psychology (e.g., process-dissociation, source monitoring). We further examined moderators as potential <i>sources of divergence</i>. We found that the absolute divergence between estimation methods was small on average (<.04; with MPT parameters ranging between 0 and 1); in some cases, however, divergence amounted to nearly the maximum possible range (.97). Divergence was partly explained by few moderators (e.g., the specific MPT model parameter, uncertainty in parameter estimation), but not by other plausible candidate moderators (e.g., parameter trade-offs, parameter correlations) or their interactions. Partial-pooling methods showed the smallest divergence within and across levels of pooling and thus seem to be an appropriate default method. Using MPT models as an example, we show how transparency and robustness can be increased in the field of cognitive modeling. (PsycInfo Database Record (c) 2024 APA, all rights reserved).</p>","PeriodicalId":20854,"journal":{"name":"Psychological bulletin","volume":" ","pages":"965-1003"},"PeriodicalIF":17.3000,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Evaluating the robustness of parameter estimates in cognitive models: A meta-analytic review of multinomial processing tree models across the multiverse of estimation methods.\",\"authors\":\"Henrik Singmann, Daniel W Heck, Marius Barth, Edgar Erdfelder, Nina R Arnold, Frederik Aust, Jimmy Calanchini, Fabian E Gümüsdagli, Sebastian S Horn, David Kellen, Karl C Klauer, Dora Matzke, Franziska Meissner, Martha Michalkiewicz, Marie Luisa Schaper, Christoph Stahl, Beatrice G Kuhlmann, Julia Groß\",\"doi\":\"10.1037/bul0000434\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Researchers have become increasingly aware that data-analysis decisions affect results. Here, we examine this issue systematically for multinomial processing tree (MPT) models, a popular class of cognitive models for categorical data. Specifically, we examine the robustness of MPT model parameter estimates that arise from two important decisions: the level of data aggregation (complete-pooling, no-pooling, or partial-pooling) and the statistical framework (frequentist or Bayesian). 
These decisions span a <i>multiverse</i> of estimation methods. We synthesized the data from 13,956 participants (164 published data sets) with a meta-analytic strategy and analyzed the <i>magnitude of divergence</i> between estimation methods for the parameters of nine popular MPT models in psychology (e.g., process-dissociation, source monitoring). We further examined moderators as potential <i>sources of divergence</i>. We found that the absolute divergence between estimation methods was small on average (<.04; with MPT parameters ranging between 0 and 1); in some cases, however, divergence amounted to nearly the maximum possible range (.97). Divergence was partly explained by few moderators (e.g., the specific MPT model parameter, uncertainty in parameter estimation), but not by other plausible candidate moderators (e.g., parameter trade-offs, parameter correlations) or their interactions. Partial-pooling methods showed the smallest divergence within and across levels of pooling and thus seem to be an appropriate default method. Using MPT models as an example, we show how transparency and robustness can be increased in the field of cognitive modeling. (PsycInfo Database Record (c) 2024 APA, all rights reserved).</p>\",\"PeriodicalId\":20854,\"journal\":{\"name\":\"Psychological bulletin\",\"volume\":\" \",\"pages\":\"965-1003\"},\"PeriodicalIF\":17.3000,\"publicationDate\":\"2024-08-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Psychological bulletin\",\"FirstCategoryId\":\"102\",\"ListUrlMain\":\"https://doi.org/10.1037/bul0000434\",\"RegionNum\":1,\"RegionCategory\":\"心理学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2024/6/27 0:00:00\",\"PubModel\":\"Epub\",\"JCR\":\"Q1\",\"JCRName\":\"PSYCHOLOGY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Psychological bulletin","FirstCategoryId":"102","ListUrlMain":"https://doi.org/10.1037/bul0000434","RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2024/6/27 0:00:00","PubModel":"Epub","JCR":"Q1","JCRName":"PSYCHOLOGY","Score":null,"Total":0}
Evaluating the robustness of parameter estimates in cognitive models: A meta-analytic review of multinomial processing tree models across the multiverse of estimation methods.
Researchers have become increasingly aware that data-analysis decisions affect results. Here, we examine this issue systematically for multinomial processing tree (MPT) models, a popular class of cognitive models for categorical data. Specifically, we examine the robustness of MPT model parameter estimates under two important decisions: the level of data aggregation (complete-pooling, no-pooling, or partial-pooling) and the statistical framework (frequentist or Bayesian). These decisions span a multiverse of estimation methods. We synthesized data from 13,956 participants (164 published data sets) with a meta-analytic strategy and analyzed the magnitude of divergence between estimation methods for the parameters of nine popular MPT models in psychology (e.g., process-dissociation, source monitoring). We further examined moderators as potential sources of divergence. We found that the absolute divergence between estimation methods was small on average (<.04, with MPT parameters ranging between 0 and 1); in some cases, however, divergence amounted to nearly the maximum possible range (.97). Divergence was partly explained by a few moderators (e.g., the specific MPT model parameter, uncertainty in parameter estimation), but not by other plausible candidate moderators (e.g., parameter trade-offs, parameter correlations) or their interactions. Partial-pooling methods showed the smallest divergence within and across levels of pooling and thus seem to be an appropriate default method. Using MPT models as an example, we show how transparency and robustness can be increased in the field of cognitive modeling. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
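To make the two design decisions above concrete, below is a minimal Python sketch of two cells of such a multiverse: frequentist complete-pooling versus frequentist no-pooling estimation of the process-dissociation model. This is an illustration only, not the authors' analysis pipeline; the model equations are the standard process-dissociation equations, and all response counts, sample sizes, and helper names (`neg_log_lik`, `fit_pd`) are hypothetical.

```python
# Minimal multiverse sketch for the process-dissociation (PD) model.
# Standard PD equations (R = controlled recollection, A = automatic influence):
#   P("old" | inclusion) = R + (1 - R) * A
#   P("old" | exclusion) = (1 - R) * A
# All data below are made up for illustration.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import binom

def neg_log_lik(params, inc_old, inc_n, exc_old, exc_n):
    """Binomial negative log-likelihood of the PD model for one data set."""
    R, A = params
    p_inc = R + (1 - R) * A   # prob. of an "old" response, inclusion condition
    p_exc = (1 - R) * A       # prob. of an "old" response, exclusion condition
    return -(binom.logpmf(inc_old, inc_n, p_inc)
             + binom.logpmf(exc_old, exc_n, p_exc))

def fit_pd(inc_old, inc_n, exc_old, exc_n):
    """Maximum-likelihood estimates of (R, A)."""
    res = minimize(neg_log_lik, x0=[0.5, 0.5],
                   args=(inc_old, inc_n, exc_old, exc_n),
                   bounds=[(1e-6, 1 - 1e-6)] * 2)  # keep parameters in (0, 1)
    return res.x

# Hypothetical "old" response counts for three participants,
# with 50 inclusion and 50 exclusion trials each.
inc_old = np.array([40, 35, 42])
exc_old = np.array([15, 20, 12])
n = 50

# Complete pooling: one fit to the frequencies summed over participants.
pooled = fit_pd(inc_old.sum(), 3 * n, exc_old.sum(), 3 * n)

# No pooling: a separate fit per participant, estimates then averaged.
individual = np.mean([fit_pd(i, n, e, n)
                      for i, e in zip(inc_old, exc_old)], axis=0)

# A simplified analogue of the article's divergence measure: the absolute
# difference between the estimates produced by the two methods.
print("absolute divergence (R, A):", np.abs(pooled - individual))
```

A Bayesian partial-pooling cell of the same multiverse would instead place group-level distributions over R and A (a hierarchical model), shrinking individual estimates toward the group mean; that is the family of methods the abstract identifies as an appropriate default.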
Journal introduction:
Psychological Bulletin publishes syntheses of research in scientific psychology. Research syntheses seek to summarize past research by drawing overall conclusions from many separate investigations that address related or identical hypotheses.
A research synthesis typically presents the authors' assessments:
- of the state of knowledge concerning the relations of interest;
- of the strengths and weaknesses of past research;
- of important issues that research has left unresolved, thereby directing future research so it can yield the maximum amount of new information.