{"title":"Mapping the learning curves of deep learning networks.","authors":"Yanru Jiang, Rick Dale","doi":"10.1371/journal.pcbi.1012286","DOIUrl":null,"url":null,"abstract":"<p><p>There is an important challenge in systematically interpreting the internal representations of deep neural networks (DNNs). Existing techniques are often less effective for non-tabular tasks, or they primarily focus on qualitative, ad-hoc interpretations of models. In response, this study introduces a cognitive science-inspired, multi-dimensional quantification and visualization approach that captures two temporal dimensions of model learning: the \"information-processing trajectory\" and the \"developmental trajectory.\" The former represents the influence of incoming signals on an agent's decision-making, while the latter conceptualizes the gradual improvement in an agent's performance throughout its lifespan. Tracking the learning curves of DNNs enables researchers to explicitly identify the model appropriateness of a given task, examine the properties of the underlying input signals, and assess the model's alignment (or lack thereof) with human learning experiences. To illustrate this method, we conducted 750 runs of simulations on two temporal tasks: gesture detection and sentence classification, showcasing its applicability across different types of deep learning tasks. Using four descriptive metrics to quantify the mapped learning curves-start, end - start, max, tmax-, we identified significant differences in learning patterns based on data sources and class distinctions (all p's < .0001), the prominent role of spatial semantics in gesture learning, and larger information gains in language learning. We highlight three key insights gained from mapping learning curves: non-monotonic progress, pairwise comparisons, and domain distinctions. We reflect on the theoretical implications of this method for cognitive processing, language models and representations from multiple modalities.</p>","PeriodicalId":20241,"journal":{"name":"PLoS Computational Biology","volume":"21 2","pages":"e1012286"},"PeriodicalIF":3.8000,"publicationDate":"2025-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"PLoS Computational Biology","FirstCategoryId":"99","ListUrlMain":"https://doi.org/10.1371/journal.pcbi.1012286","RegionNum":2,"RegionCategory":"生物学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"BIOCHEMICAL RESEARCH METHODS","Score":null,"Total":0}
Citations: 0
Abstract
Systematically interpreting the internal representations of deep neural networks (DNNs) remains an important challenge. Existing techniques are often less effective for non-tabular tasks, or they primarily focus on qualitative, ad-hoc interpretations of models. In response, this study introduces a cognitive science-inspired, multi-dimensional quantification and visualization approach that captures two temporal dimensions of model learning: the "information-processing trajectory" and the "developmental trajectory." The former represents the influence of incoming signals on an agent's decision-making, while the latter conceptualizes the gradual improvement in an agent's performance throughout its lifespan. Tracking the learning curves of DNNs enables researchers to explicitly identify the model appropriateness of a given task, examine the properties of the underlying input signals, and assess the model's alignment (or lack thereof) with human learning experiences. To illustrate this method, we conducted 750 runs of simulations on two temporal tasks: gesture detection and sentence classification, showcasing its applicability across different types of deep learning tasks. Using four descriptive metrics to quantify the mapped learning curves (start, end − start, max, tmax), we identified significant differences in learning patterns based on data sources and class distinctions (all p's < .0001), the prominent role of spatial semantics in gesture learning, and larger information gains in language learning. We highlight three key insights gained from mapping learning curves: non-monotonic progress, pairwise comparisons, and domain distinctions. We reflect on the theoretical implications of this method for cognitive processing, language models, and representations from multiple modalities.
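To make the four descriptive metrics concrete, here is a minimal sketch of how they could be computed from a mapped learning curve. It assumes the curve is a 1-D sequence of per-measurement performance values; the function name `curve_metrics` and the array-based representation are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def curve_metrics(curve, times=None):
    """Summarize a learning curve with the four descriptive metrics named
    in the abstract: start, end - start, max, and tmax.

    `curve` is assumed to be a 1-D sequence of performance values (e.g.,
    accuracy) recorded over training; `times` optionally gives the time
    axis and defaults to the index of each measurement.
    """
    curve = np.asarray(curve, dtype=float)
    if times is None:
        times = np.arange(len(curve))
    return {
        "start": curve[0],                        # performance at the first measurement
        "end_minus_start": curve[-1] - curve[0],  # overall gain across training
        "max": curve.max(),                       # peak performance reached
        "tmax": times[int(curve.argmax())],       # when the peak is reached
    }

# Example: a noisy, non-monotonic learning curve
example = [0.10, 0.35, 0.35, 0.62, 0.55, 0.70, 0.68]
print(curve_metrics(example))
```

Under this assumed representation, the same summary can be applied along either temporal dimension described above: across incoming signals within a run (information-processing trajectory) or across training epochs (developmental trajectory).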
Journal introduction:
PLOS Computational Biology features works of exceptional significance that further our understanding of living systems at all scales—from molecules and cells, to patient populations and ecosystems—through the application of computational methods. Readers include life and computational scientists, who can take the important findings presented here to the next level of discovery.
Research articles must be declared as belonging to a relevant section. More information about the sections can be found in the submission guidelines.
Research articles should model aspects of biological systems, demonstrate both methodological and scientific novelty, and provide profound new biological insights.
Generally, reliability and significance of biological discovery through computation should be validated and enriched by experimental studies. Inclusion of experimental validation is not required for publication, but should be referenced where possible. Inclusion of experimental validation of a modest biological discovery through computation does not render a manuscript suitable for PLOS Computational Biology.
Research articles specifically designated as Methods papers should describe outstanding methods of exceptional importance that have been shown, or have the promise to provide new biological insights. The method must already be widely adopted, or have the promise of wide adoption by a broad community of users. Enhancements to existing published methods will only be considered if those enhancements bring exceptional new capabilities.