{"title":"Brain-Training Pessimism, but Applied-Memory Optimism","authors":"J. McCabe, Thomas S. Redick, R. Engle","doi":"10.1177/1529100616664716","DOIUrl":null,"url":null,"abstract":"As is convincingly demonstrated in the target article (Simons et al., 2016, this issue), despite the numerous forms of brain training that have been tested and touted in the past 15 years, there’s little to no evidence that currently existing programs produce lasting, meaningful change in the performance of cognitive tasks that differ from the trained tasks. As detailed by Simons et al., numerous methodological issues cloud the interpretation of many studies claiming successful far transfer. These limitations include small sample sizes, passive control groups, single tests of outcomes, unblinded informantand self-report measures of functioning, and hypothesisinconsistent significant effects. (However, note that, with older adults, a successful result of the intervention could be to prevent decline in the training group, such that they stay at their pretest level while the control group declines.) These issues are separate from problems related to publication bias, selective reporting of significant and nonsignificant outcomes, use of unjustified one-tailed t tests, and failure to explicitly note shared data across publications. So, considering that the literature contains such potential false-positive publications (Simmons, Nelson, & Simonsohn, 2011), it may be surprising and disheartening to many that some descriptive reviews (Chacko et al., 2013; Salthouse, 2006; Simons et al., 2016) and meta-analyses (Melby-Lervåg, Redick, & Hulme, 2016; Rapport, Orban, Kofler, & Friedman, 2013) have concluded that existing cognitive-training methods are relatively ineffective, despite their popularity and increasing market share. For example, a recent working-memory-training metaanalysis (Melby-Lervåg et al., 2016) evaluated 87 studies examining transfer to working memory, intelligence, and various educationally relevant outcomes (e.g., reading comprehension, math, word decoding). The studies varied considerably in terms of the sample composition (age; typical vs. atypical functioning) and the nature of the working-memory training (verbal, nonverbal, or both verbal and nonverbal stimuli; n-back vs. span task methodology; few vs. many training sessions). Despite the diversity in the design and administration of the training, the results were quite clear. Following training, there were reliable improvements in performance on verbal and nonverbal working-memory tasks identical or similar to the trained tasks. However, in terms of far transfer, there was no convincing evidence of improvements, especially when working-memory training was compared to an active-control condition. The meta-analysis also demonstrated that, in the working-memory-training literature, the largest nonverbal-intelligence far-transfer effects are statistically more likely to come from studies with small sample sizes and passive control groups. This finding is not particularly surprising, given other work showing that most working-memory training studies are dramatically underpowered (Bogg & Lasecki, 2015) and that underpowered studies with small sample sizes are more likely to produce inflated effect sizes (Button et al., 2013). 
In addition, small samples are predominantly the reason irregular pretest-posttest patterns have been observed in the control groups in various working-memory and video-game intervention studies (for review, see Redick, 2015; Redick & Webster, 2014). In these studies, inferential statistics and effect-size metrics provide evidence that the training “worked,” but investigation of the descriptive statistics tells a different story. Specifically, a number of studies with children and young adult samples have examined intelligence or other academic achievement outcomes before and after training. Closer inspection indicates that training “improved” intelligence or academic achievement relative to the control condition because the control group declined from pretest to posttest—that is, the training group did not significantly change from pretest to posttest. 664716 PSIXXX10.1177/1529100616664716McCabe et al.Brain-Training Pessimism research-article2016","PeriodicalId":18,"journal":{"name":"ACS Macro Letters","volume":null,"pages":null},"PeriodicalIF":5.1000,"publicationDate":"2016-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/1529100616664716","citationCount":"28","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACS Macro Letters","FirstCategoryId":"102","ListUrlMain":"https://doi.org/10.1177/1529100616664716","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"POLYMER SCIENCE","Score":null,"Total":0}
Abstract
As is convincingly demonstrated in the target article (Simons et al., 2016, this issue), despite the numerous forms of brain training that have been tested and touted in the past 15 years, there is little to no evidence that currently existing programs produce lasting, meaningful change in the performance of cognitive tasks that differ from the trained tasks. As detailed by Simons et al., numerous methodological issues cloud the interpretation of many studies claiming successful far transfer. These limitations include small sample sizes, passive control groups, single tests of outcomes, unblinded informant- and self-report measures of functioning, and hypothesis-inconsistent significant effects. (However, note that, with older adults, a successful result of the intervention could be to prevent decline in the training group, such that they stay at their pretest level while the control group declines.) These issues are separate from problems related to publication bias, selective reporting of significant and nonsignificant outcomes, use of unjustified one-tailed t tests, and failure to explicitly note shared data across publications. So, considering that the literature contains such potential false-positive publications (Simmons, Nelson, & Simonsohn, 2011), it may be surprising and disheartening to many that some descriptive reviews (Chacko et al., 2013; Salthouse, 2006; Simons et al., 2016) and meta-analyses (Melby-Lervåg, Redick, & Hulme, 2016; Rapport, Orban, Kofler, & Friedman, 2013) have concluded that existing cognitive-training methods are relatively ineffective, despite their popularity and increasing market share.

For example, a recent working-memory-training meta-analysis (Melby-Lervåg et al., 2016) evaluated 87 studies examining transfer to working memory, intelligence, and various educationally relevant outcomes (e.g., reading comprehension, math, word decoding). The studies varied considerably in terms of the sample composition (age; typical vs. atypical functioning) and the nature of the working-memory training (verbal, nonverbal, or both verbal and nonverbal stimuli; n-back vs. span task methodology; few vs. many training sessions). Despite the diversity in the design and administration of the training, the results were quite clear. Following training, there were reliable improvements in performance on verbal and nonverbal working-memory tasks identical or similar to the trained tasks. However, in terms of far transfer, there was no convincing evidence of improvements, especially when working-memory training was compared to an active-control condition. The meta-analysis also demonstrated that, in the working-memory-training literature, the largest nonverbal-intelligence far-transfer effects are statistically more likely to come from studies with small sample sizes and passive control groups. This finding is not particularly surprising, given other work showing that most working-memory-training studies are dramatically underpowered (Bogg & Lasecki, 2015) and that underpowered studies with small sample sizes are more likely to produce inflated effect sizes (Button et al., 2013).
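To make that last point concrete, the following sketch is a minimal, hypothetical simulation (not an analysis from any of the cited studies) of how a significance filter inflates effect sizes in underpowered designs: many small two-group studies are simulated with a modest true effect, and only the studies that reach p < .05 are retained. The true effect size, group sizes, and number of simulated studies are illustrative assumptions.

```python
# Illustrative simulation: small, underpowered studies that happen to reach
# significance report inflated Cohen's d values relative to the true effect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_d = 0.2          # assumed small true far-transfer effect (illustrative)
n_per_group = 15      # small group size typical of training studies (illustrative)
n_studies = 10_000

significant_ds = []
for _ in range(n_studies):
    # Standardized outcome scores for training and control groups
    training = rng.normal(true_d, 1.0, n_per_group)
    control = rng.normal(0.0, 1.0, n_per_group)
    t, p = stats.ttest_ind(training, control)
    # Pooled-SD Cohen's d for the observed group difference
    pooled_sd = np.sqrt((training.var(ddof=1) + control.var(ddof=1)) / 2)
    d = (training.mean() - control.mean()) / pooled_sd
    if p < 0.05:
        significant_ds.append(d)

print(f"true d = {true_d}")
print(f"mean d among 'significant' studies = {np.mean(significant_ds):.2f}")
# With 15 participants per group, the studies that reach significance report
# effects several times larger than the true effect.
```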
In addition, small samples are predominantly the reason irregular pretest-posttest patterns have been observed in the control groups of various working-memory and video-game intervention studies (for review, see Redick, 2015; Redick & Webster, 2014). In these studies, inferential statistics and effect-size metrics provide evidence that the training "worked," but investigation of the descriptive statistics tells a different story. Specifically, a number of studies with children and young-adult samples have examined intelligence or other academic-achievement outcomes before and after training. Closer inspection indicates that training "improved" intelligence or academic achievement relative to the control condition because the control group declined from pretest to posttest; that is, the training group did not significantly change from pretest to posttest.
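The sketch below (again illustrative, with made-up numbers rather than data from any cited study) reproduces this pattern: the training group's scores do not change from pretest to posttest, the control group's scores decline, and a between-groups comparison of gain scores nonetheless suggests that training "worked."

```python
# Illustrative simulation: an apparent "training effect" driven entirely by
# decline in the control group, not improvement in the training group.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 25  # participants per group (illustrative)

# Simulated IQ-like scores (mean 100, SD 15); posttest = pretest + noise
pre_train = rng.normal(100, 15, n)
post_train = pre_train + rng.normal(0, 5, n)    # training group: no change
pre_ctrl = rng.normal(100, 15, n)
post_ctrl = pre_ctrl + rng.normal(-6, 5, n)     # control group: declines

gain_train = post_train - pre_train
gain_ctrl = post_ctrl - pre_ctrl

# Between-groups comparison of gain scores looks like a training effect...
t_between, p_between = stats.ttest_ind(gain_train, gain_ctrl)
# ...but the training group shows no pretest-to-posttest improvement.
t_within, p_within = stats.ttest_rel(post_train, pre_train)

print(f"training vs. control gains: t = {t_between:.2f}, p = {p_between:.4f}")
print(f"training pretest vs. posttest: t = {t_within:.2f}, p = {p_within:.4f}")
print(f"mean gain, training = {gain_train.mean():.1f}; control = {gain_ctrl.mean():.1f}")
```

Inspecting the descriptive statistics (the group means and gains), rather than only the significance test on the group difference, is what reveals that the "effect" comes from the control group's decline.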