Pub Date: 2023-01-03 | DOI: 10.1007/s42113-022-00162-1
Marlou Nadine Perquin, Marieke K van Vugt, Craig Hedge, Aline Bompas
Human performance shows substantial endogenous variability over time, and this variability is a robust marker of individual differences. Of growing interest to psychologists is the realisation that variability is not fully random, but often exhibits temporal dependencies. However, the measurement and interpretation of these dependencies come with several controversies. Furthermore, their potential benefit for studying individual differences in healthy and clinical populations remains unclear. Here, we gather new and archival datasets featuring 11 sensorimotor and cognitive tasks across 526 participants to examine individual differences in temporal structures. We first investigate the intra-individual repeatability of the most common measures of temporal structure, to test their potential for capturing stable individual differences. Second, we examine inter-individual differences in these measures using: (1) task performance assessed from the same data, (2) meta-cognitive ratings of on-taskness from thought probes occasionally presented throughout the task, and (3) self-assessed attention-deficit-related traits. Across all datasets, autocorrelation at lag 1 and the Power Spectral Density (PSD) slope showed high intra-individual repeatability across sessions and correlated with task performance. The Detrended Fluctuation Analysis slope showed the same pattern, but less reliably. The long-term component (d) of the ARFIMA(1,d,1) model showed poor repeatability and no correlation with performance. Overall, these measures failed to show external validity when correlated with either mean subjective attentional state or self-assessed traits between participants. Thus, some measures of serial dependencies may be stable individual traits, but their usefulness in capturing individual differences in other constructs typically associated with variability in performance seems limited. We conclude with comprehensive recommendations for researchers.
Supplementary information: The online version contains supplementary material available at 10.1007/s42113-022-00162-1.
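Two of the measures named in the abstract are straightforward to compute from a single response-time series. A rough sketch follows (function names and estimator choices are ours, not the paper's; the plain periodogram fit below is only one of several ways to estimate a PSD slope):

```python
import numpy as np

def lag1_autocorrelation(x):
    """Lag-1 autocorrelation of a (e.g. reaction-time) series."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    return np.sum(x[:-1] * x[1:]) / np.sum(x * x)

def psd_slope(x):
    """Slope of log power vs. log frequency from a simple periodogram fit.
    More negative slopes indicate stronger long-range temporal structure."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    freqs = np.fft.rfftfreq(len(x))
    power = np.abs(np.fft.rfft(x)) ** 2
    # skip the zero-frequency bin before taking logs
    slope, _ = np.polyfit(np.log(freqs[1:]), np.log(power[1:]), 1)
    return slope

# sanity check: white noise has ~zero lag-1 autocorrelation and a ~flat spectrum
rng = np.random.default_rng(0)
noise = rng.normal(size=4096)
print(lag1_autocorrelation(noise), psd_slope(noise))
```

Both quantities are descriptive statistics of the series itself, which is why (as the abstract notes) they can be correlated with task performance computed from the very same data.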
Title: "Temporal Structure in Sensorimotor Variability: A Stable Trait, But What For?" Computational Brain & Behavior, pp. 1-38. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9810256/pdf/
Pub Date: 2023-01-01 | Epub Date: 2023-02-14 | DOI: 10.1007/s42113-022-00158-x
Johnny van Doorn, Frederik Aust, Julia M Haaf, Angelika M Stefan, Eric-Jan Wagenmakers
In van Doorn et al. (2021), we outlined a series of open questions concerning Bayes factors for mixed effects model comparison, with an emphasis on the impact of aggregation, the effect of measurement error, the choice of prior distributions, and the detection of interactions. Seven expert commentaries (partially) addressed these initial questions. Perhaps surprisingly, the experts disagreed (often strongly) on what constitutes best practice, a testament to the intricacy of conducting a mixed effects model comparison. Here, we provide our perspective on these comments and highlight topics that warrant further discussion. In general, we agree with many of the commentaries that, in order to take full advantage of Bayesian mixed model comparison, it is important to be aware of the specific assumptions that underlie the to-be-compared models.
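For readers unfamiliar with the quantity under discussion, a Bayes factor can be illustrated with the common BIC approximation, here applied to a simple fixed-effects regression rather than the mixed models the paper concerns (function names and the simulated data are ours; this is a pedagogical sketch, not the authors' method):

```python
import numpy as np

def bic(y, X):
    """BIC of an ordinary least-squares fit (Gaussian likelihood, ML variance)."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = np.mean(resid ** 2)
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    return -2 * loglik + (k + 1) * np.log(n)  # +1 for the variance parameter

def bf10_bic(bic0, bic1):
    """Approximate Bayes factor for model 1 over model 0: exp((BIC0 - BIC1)/2)."""
    return np.exp(0.5 * (bic0 - bic1))

rng = np.random.default_rng(1)
n = 200
x = rng.normal(size=n)
y = 0.5 * x + rng.normal(size=n)       # data with a genuine effect
X0 = np.ones((n, 1))                   # null model: intercept only
X1 = np.column_stack([np.ones(n), x])  # alternative: intercept + slope
print(bf10_bic(bic(y, X0), bic(y, X1)))
```

The open questions in the paper arise precisely because, once random effects enter the models, the choice of priors and of aggregation level changes the resulting Bayes factor in ways this simple approximation hides.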
Title: "Bayes Factors for Mixed Models: Perspective on Responses." Computational Brain & Behavior, 6(1), pp. 127-139. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9981503/pdf/
Pub Date: 2022-11-23 | DOI: 10.1007/s42113-022-00137-2
K. Damaso, Paul G. Williams, A. Heathcote
Title: "What Happens After a Fast Versus Slow Error, and How Does It Relate to Evidence Accumulation?" Computational Brain & Behavior, pp. 527-546.
Pub Date: 2022-11-11 | DOI: 10.1007/s42113-022-00156-z
Simon Valentin, Neil R. Bramley, Christopher G. Lucas
Title: "Discovering Common Hidden Causes in Sequences of Events." Computational Brain & Behavior, pp. 377-399.
Pub Date: 2022-09-28 | DOI: 10.1007/s42113-022-00154-1
A. Ly, E. Wagenmakers
Title: "Measure-Theoretic Musings Cannot Salvage the Full Bayesian Significance Test as a Measure of Evidence." Computational Brain & Behavior, pp. 583-589.
Pub Date: 2022-09-21 | DOI: 10.1007/s42113-022-00155-0
Jacob VanDrunen, Kevin Nam, Mark Beers, Z. Pizlo
Title: "Traveling Salesperson Problem with Simple Obstacles: The Role of Multidimensional Scaling and the Role of Clustering." Computational Brain & Behavior, pp. 513-525.
Pub Date: 2022-09-15 | DOI: 10.1007/s42113-022-00152-3
J. Veríssimo
Title: "When Fixed and Random Effects Mismatch: Another Case of Inflation of Evidence in Non-Maximal Models." Computational Brain & Behavior, pp. 84-101.
Pub Date: 2022-09-01 | Epub Date: 2022-06-13 | DOI: 10.1007/s42113-022-00143-4
Akash Umakantha, Braden A Purcell, Thomas J Palmeri
Many models of decision making assume accumulation of evidence to a threshold as a core mechanism for predicting response probabilities and response times. A spiking neural network model (Wang, 2002) instantiates these mechanisms at the level of biophysically plausible pools of neurons with excitatory and inhibitory connections, and has numerous model parameters tuned by physiological measures. The diffusion model (Ratcliff, 1978) is a cognitive model that can be fitted to a range of behaviors and conditions. We investigated how parameters of the cognitive-level diffusion model relate to parameters of the neural-level spiking model. In each simulated "experiment", we generated "data" from the spiking neural network by factorially combining a manipulation of choice difficulty (via the input to the spiking model) with a manipulation of one of the core parameters of the spiking model. We then fitted the diffusion model to these simulated data to observe how manipulation of each core spiking model parameter mapped onto fitted drift rate, response threshold, and non-decision time. Manipulations of spiking model parameters related to input sensitivity, threshold, and stimulus processing time mapped onto their conceptual analogues in the diffusion model, namely drift rate, threshold, and non-decision time. Manipulations of spiking model parameters with no direct analogue in the diffusion model (non-stimulus-specific background input, strength of recurrent excitation, and receptor conductances) mapped onto threshold in the diffusion model. We discuss implications of these results for interpretations of fits of the diffusion model to behavioral data.
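The diffusion model's core mechanism, an evidence accumulator that jointly produces choices and response times, can be sketched in a few lines (parameter values and function names are ours; this is not the spiking network nor the fitting procedure used in the paper):

```python
import numpy as np

def simulate_ddm(drift, threshold, ndt, n_trials=1000, dt=0.001, noise=1.0, seed=0):
    """Simulate a simple drift-diffusion model: evidence starts at 0 and
    accumulates with Gaussian noise until it hits +threshold (upper bound,
    e.g. a correct response) or -threshold (lower bound, an error)."""
    rng = np.random.default_rng(seed)
    rts, choices = [], []
    for _ in range(n_trials):
        evidence, t = 0.0, 0.0
        while abs(evidence) < threshold:
            evidence += drift * dt + noise * np.sqrt(dt) * rng.normal()
            t += dt
        rts.append(t + ndt)           # add non-decision time
        choices.append(evidence > 0)  # True = upper-bound response
    return np.array(rts), np.array(choices)

rts, choices = simulate_ddm(drift=1.0, threshold=1.0, ndt=0.3)
print(choices.mean(), rts.mean())
```

Raising the drift rate increases accuracy and shortens response times; raising the threshold trades speed for accuracy; the non-decision time shifts the whole RT distribution. These are the three fitted quantities the abstract refers to.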
Title: "Relating a Spiking Neural Network Model and the Diffusion Model of Decision-Making." Computational Brain & Behavior, 5(3), pp. 279-301. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9673774/pdf/nihms-1830711.pdf
Pub Date: 2022-09-01 | Epub Date: 2022-06-07 | DOI: 10.1007/s42113-022-00138-1
Michael D Lee, Percy K Mistry, Vinod Menon
The n-back task is a widely used behavioral task for measuring working memory and the ability to inhibit interfering information. We develop a novel model of the commonly used 2-back task using the cognitive psychometric framework provided by Multinomial Processing Trees. Our model involves three parameters: a memory parameter, corresponding to how well an individual encodes and updates sequence information about presented stimuli; a decision parameter corresponding to how well participants execute choices based on information stored in memory; and a base-rate parameter corresponding to bias for responding "yes" or "no". We test the parameter recovery properties of the model using existing 2-back experimental designs, and demonstrate the application of the model to two previous data sets: one from social psychology involving faces corresponding to different races (Stelter and Degner, British Journal of Psychology 109:777-798, 2018), and one from cognitive neuroscience involving more than 1000 participants from the Human Connectome Project (Van Essen et al., Neuroimage 80:62-79, 2013). We demonstrate that the model can be used to infer interpretable individual-level parameters. We develop a hierarchical extension of the model to test differences between stimulus conditions, comparing faces of different races, and comparing face to non-face stimuli. We also develop a multivariate regression extension to examine the relationship between the model parameters and individual performance on standardized cognitive measures including the List Sorting and Flanker tasks. We conclude by discussing how our model can be used to dissociate underlying cognitive processes such as encoding failures, inhibition failures, and binding failures.
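One plausible way such a three-parameter tree could map onto response probabilities is sketched below; note that this particular tree structure is an illustrative assumption of ours, not the published model specification:

```python
def mpt_2back_probs(m, d, b):
    """Hypothetical MPT response probabilities for a 2-back trial.
    m: P(stimulus correctly encoded/updated in memory)
    d: P(correct decision given an intact memory representation)
    b: P(guessing "yes") whenever memory or decision fails
    NOTE: this tree is an illustrative assumption, not the paper's model."""
    p_informed = m * d                              # memory intact AND decision executed
    p_yes_target = p_informed + (1 - p_informed) * b  # informed "yes", else guess
    p_yes_lure = (1 - p_informed) * b                 # informed "no", else guess
    return p_yes_target, p_yes_lure

# perfect memory and decisions yield perfect discrimination
print(mpt_2back_probs(1.0, 1.0, 0.5))  # → (1.0, 0.0)
```

The appeal of an MPT decomposition is visible even in this toy version: hit and false-alarm rates are jointly constrained by the latent parameters, so encoding failures (low m) and guessing bias (b) make distinct, separable predictions.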
Title: "A Multinomial Processing Tree Model of the 2-back Working Memory Task." Computational Brain & Behavior, 5(3), pp. 261-278. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10593202/pdf/nihms-1890058.pdf