Modeling Retest Effects in a Longitudinal Measurement Burst Study of Memory
Pub Date: 2020-06-01 | Epub Date: 2019-08-14 | DOI: 10.1007/s42113-019-00047-w
Adam W Broitman, Michael J Kahana, M Karl Healey
Longitudinal designs must deal with the confound between increasing age and increasing task experience (i.e., retest effects). Most existing methods for disentangling these factors rely on large sample sizes and are impractical for smaller scale projects. Here, we show that a measurement burst design combined with a model of retest effects can be used to study age-related change with modest sample sizes. A combined model of age-related change and retest-related effects was developed. In a simulation experiment, we show that with sample sizes as small as n = 8, the model can reliably detect age effects of the size reported in the longitudinal literature while avoiding false positives when there is no age effect. We applied the model to data from a measurement burst study in which eight subjects completed a burst of seven sessions of free recall every year for five years. Six additional subjects completed a burst only in years 1 and 5. They should, therefore, have smaller retest effects but equal age effects. The raw data suggested slight improvement in memory over five years. However, applying the model to the yearly-testing group revealed that a substantial positive retest effect was obscuring stability in memory performance. Supporting this finding, the control group showed a smaller retest effect but an equal age effect. Measurement burst designs combined with models of retest effects allow researchers to employ longitudinal designs in areas where previously only cross-sectional designs were feasible.
Computational Brain & Behavior, 3(2), 200–207.
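The abstract above does not reproduce the model's equations, so the following Python sketch is only an illustration of the general idea under invented assumptions: yearly recall performance is simulated as a linear age trend plus a retest (practice) effect that grows with the log of accumulated sessions, for a yearly-testing group and a years-1-and-5 control group, and the two predictors are then estimated jointly. The 0.55 baseline, the slope values, the log form of the retest effect, and the ordinary-least-squares fit are placeholders, not the authors' model.

```python
# Illustrative sketch only; functional form and parameters are invented.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

def simulate_subject(test_years, age_slope=-0.01, retest_gain=0.03, noise=0.02):
    """Mean recall for one subject's yearly bursts (one row per tested year)."""
    rows, sessions_done = [], 0
    for year in test_years:
        sessions_done += 7                                  # one burst = seven sessions
        recall = (0.55
                  + age_slope * year                        # age-related change
                  + retest_gain * np.log1p(sessions_done)   # practice (retest) effect
                  + rng.normal(0, noise))
        rows.append({"year": year,
                     "log_sessions": np.log1p(sessions_done),
                     "recall": recall})
    return rows

# Eight subjects tested every year; six controls tested only in years 1 and 5.
rows = []
for _ in range(8):
    rows += simulate_subject(range(5))
for _ in range(6):
    rows += simulate_subject([0, 4])
data = pd.DataFrame(rows)

# Regress recall on both predictors so the age slope is estimated after
# accounting for accumulated task experience.
fit = smf.ols("recall ~ year + log_sessions", data=data).fit()
print(fit.params)
```

Because the control group reaches the same ages with fewer accumulated sessions, the two predictors are not collinear, which is what lets a joint fit attribute change to age rather than practice.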
Neural habituation enhances novelty detection: an EEG study of rapidly presented words
Pub Date: 2020-06-01 | Epub Date: 2019-12-18 | DOI: 10.1007/s42113-019-00071-w
Len P L Jacob, David E Huber
Huber and O'Reilly (2003) proposed that neural habituation aids perceptual processing, separating neural responses to currently viewed objects from recently viewed objects. However, synaptic depression has costs, producing repetition deficits. Prior work confirmed the transition from repetition benefits to deficits with increasing duration of a prime object, but the prediction of enhanced novelty detection was not tested. The current study examined this prediction with a same/different word priming task, using support vector machine (SVM) classification of EEG data, ERP analyses focused on the N400, and dynamic neural network simulations fit to behavioral data to provide a priori predictions of the ERP effects. Subjects made same/different judgments about a response word in relation to an immediately preceding brief target word; prime durations were short (50 ms) or long (400 ms), and long durations decreased P100/N170 responses to the target word, suggesting that this manipulation increased habituation. Following long-duration primes, correct "different" judgments of primed response words increased, evidencing enhanced novelty detection. An SVM classifier predicted trial-by-trial behavior with 66.34% accuracy on held-out data, with its greatest predictive power in a time window consistent with the N400. The habituation model was augmented with a maintained semantics layer (i.e., working memory) to generate behavior and N400 predictions. A second experiment used response-locked ERPs, confirming the model's assumption that residual activation in working memory is the basis of novelty decisions. These results support the theory that neural habituation enhances novelty detection, and the model assumption that the N400 reflects updating of semantic information in working memory.
Computational Brain & Behavior, 3(2), 208–227.
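The classification analysis is described only at a high level in the abstract, so the sketch below shows a generic version of that workflow in Python: average epoched EEG within a late time window, use the per-trial channel means as features, train a linear SVM, and score accuracy on held-out trials. The synthetic data, the channel count, the window boundaries, and the linear kernel are assumptions for illustration; only the overall train/held-out evaluation pattern mirrors what the abstract reports.

```python
# Generic SVM-on-EEG sketch; data, window, and kernel are placeholder assumptions.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)

n_trials, n_channels, n_timepoints = 400, 32, 50
eeg = rng.normal(size=(n_trials, n_channels, n_timepoints))  # stand-in for epoched EEG
responses = rng.integers(0, 2, size=n_trials)                # 0 = "same", 1 = "different"

# Average the signal within a late time window (a stand-in for an N400-like
# window) and use the per-channel means as the feature vector for each trial.
window = slice(30, 45)
features = eeg[:, :, window].mean(axis=2)

X_train, X_test, y_train, y_test = train_test_split(
    features, responses, test_size=0.25, random_state=0)

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```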
Explanation or Modeling: a Reply to Kellen and Klauer
Pub Date: 2020-04-15 | DOI: 10.1007/s42113-020-00077-9
Marco Ragni, P. Johnson-Laird
Computational Brain & Behavior, pp. 354–361.
Beyond Rescorla–Wagner: the Ups and Downs of Learning
Pub Date: 2020-04-10 | DOI: 10.1007/s42113-021-00103-4
G. Calcagni, Justin A. Harris, R. Pellón
Computational Brain & Behavior, pp. 355–379.
Real-time Adaptive Design Optimization Within Functional MRI Experiments
Pub Date: 2020-04-02 | DOI: 10.1007/s42113-020-00079-7
Giwon Bahg, P. Sederberg, Jay I. Myung, Xiangrui Li, M. Pitt, Zhong-Lin Lu, Brandon M. Turner
Computational Brain & Behavior, pp. 400–429.
Modeling the Wason Selection Task: a Response to Ragni and Johnson-Laird (2020)
Pub Date: 2020-04-01 | DOI: 10.1007/s42113-020-00086-8
David Kellen, K. C. Klauer
Computational Brain & Behavior, pp. 362–367.
A Cautionary Note on Evidence-Accumulation Models of Response Inhibition in the Stop-Signal Paradigm
Pub Date: 2020-03-30 | DOI: 10.1007/s42113-020-00075-x
D. Matzke, G. Logan, A. Heathcote
Computational Brain & Behavior, pp. 269–288.
Modeling Preference Reversals in Context Effects over Time
Pub Date: 2020-03-27 | DOI: 10.1007/s42113-020-00078-8
Andrea M. Cataldo, A. Cohen
Computational Brain & Behavior, pp. 101–123.
Hierarchical Hidden Markov Models for Response Time Data
Pub Date: 2020-03-26 | DOI: 10.1007/s42113-020-00076-w
D. Kunkel, Zhifei Yan, P. Craigmile, M. Peruggia, T. Van Zandt
Computational Brain & Behavior, pp. 70–86.
Generalization at Retrieval Using Associative Networks with Transient Weight Changes
Pub Date: 2020-03-21 | DOI: 10.31234/osf.io/3nzgh
Kevin D. Shabahang, H. Yim, S. Dennis
Without having seen a bigram like “her buffalo”, you can easily tell that it is congruent because “buffalo” can be aligned with more common nouns like “cat” or “dog” that have been seen in contexts like “her cat” or “her dog”—the novel bigram structurally aligns with representations in memory. We present a new class of associative nets we call Dynamic-Eigen-Nets, and provide simulations that show how they generalize to patterns that are structurally aligned with the training domain. Linear-Associative-Nets respond with the same pattern regardless of input, motivating the introduction of saturation to facilitate other response states. However, models using saturation cannot readily generalize to novel but structurally aligned patterns. Dynamic-Eigen-Nets address this problem by dynamically biasing the eigenspectrum towards external input using temporary weight changes. We demonstrate how a two-slot Dynamic-Eigen-Net trained on a text corpus provides an account of bigram judgment-of-grammaticality and lexical decision tasks, showing that it captures syntactic regularities from the corpus better than the Brain-State-in-a-Box and the Linear-Associative-Net. We end with a simulation showing how a Dynamic-Eigen-Net is sensitive to syntactic violations introduced in bigrams, even after the associations that encode those bigrams are deleted from memory. Across all simulations, the Dynamic-Eigen-Net reliably outperforms the Brain-State-in-a-Box and the Linear-Associative-Net. We propose Dynamic-Eigen-Nets as associative nets that generalize at retrieval, rather than at encoding, through recurrent feedback.
Computational Brain & Behavior, pp. 124–155.
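The mechanism described in the abstract, temporarily biasing a linear associative net's eigenspectrum toward the current input, is sketched below in schematic form; this is one reading of the idea, not the authors' implementation. The pattern dimensionality, the transient gain, the number of feedback iterations, and the random training patterns are all invented for the demo.

```python
# Schematic sketch of retrieval with a transient weight change; not the
# authors' Dynamic-Eigen-Net implementation.
import numpy as np

rng = np.random.default_rng(2)
dim, n_patterns = 100, 20

# Long-term associative weights: sum of outer products of stored patterns.
patterns = rng.normal(size=(n_patterns, dim))
patterns /= np.linalg.norm(patterns, axis=1, keepdims=True)
W = patterns.T @ patterns

def retrieve(probe, transient_gain=0.5, n_iter=10):
    """Echo a probe through the net after a temporary Hebbian weight boost."""
    probe = probe / np.linalg.norm(probe)
    # Temporary weight change: bias the weights toward the current input.
    W_t = W + transient_gain * np.outer(probe, probe)
    state = probe.copy()
    for _ in range(n_iter):
        state = W_t @ state
        state /= np.linalg.norm(state)   # keep the recurrent state bounded
    return state

# A probe that overlaps stored structure settles into a state that still
# resembles it; an arbitrary noise probe is pulled toward the stored subspace.
stored_probe = patterns[0] + 0.1 * rng.normal(size=dim)
novel_probe = rng.normal(size=dim)
print(np.dot(retrieve(stored_probe), stored_probe / np.linalg.norm(stored_probe)))
print(np.dot(retrieve(novel_probe), novel_probe / np.linalg.norm(novel_probe)))
```

The transient outer-product term raises the eigenvalue along the probe direction only for the duration of retrieval, so the settled state reflects a compromise between the current input and the stored associative structure.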