{"title":"神经科学中的解释模型,第 2 部分:功能可理解性和差异原则","authors":"Rosa Cao , Daniel Yamins","doi":"10.1016/j.cogsys.2023.101200","DOIUrl":null,"url":null,"abstract":"<div><p>Computational modeling plays an increasingly important role in neuroscience, highlighting the philosophical question of how computational models explain. In the particular case of neural network models, concerns have been raised about their intelligibility, and how these models relate (if at all) to what is found in the brain. We claim that what makes a system intelligible is an understanding of the dependencies between its behavior and the factors that are responsible for that behavior. In biology, many of these dependencies are naturally “top-down”, as ethological imperatives interact with evolutionary and developmental constraints under natural selection to produce systems with capabilities and behaviors appropriate to their evolutionary needs. We describe how the optimization techniques used to construct neural network models capture some key aspects of these dependencies, and thus help explain <em>why</em> brain systems are as they are — because when a challenging ecologically-relevant goal is shared by a neural network and the brain, it places constraints on the possible mechanisms exhibited in both kinds of systems. The presence and strength of these constraints explain why some outcomes are more likely than others. By combining two familiar modes of explanation — one based on bottom-up mechanistic description (whose relation to neural network models we address in a companion paper) and the other based on top-down constraints, these models have the potential to illuminate brain function.</p></div>","PeriodicalId":55242,"journal":{"name":"Cognitive Systems Research","volume":"85 ","pages":"Article 101200"},"PeriodicalIF":2.1000,"publicationDate":"2023-12-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Explanatory models in neuroscience, Part 2: Functional intelligibility and the contravariance principle\",\"authors\":\"Rosa Cao , Daniel Yamins\",\"doi\":\"10.1016/j.cogsys.2023.101200\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>Computational modeling plays an increasingly important role in neuroscience, highlighting the philosophical question of how computational models explain. In the particular case of neural network models, concerns have been raised about their intelligibility, and how these models relate (if at all) to what is found in the brain. We claim that what makes a system intelligible is an understanding of the dependencies between its behavior and the factors that are responsible for that behavior. In biology, many of these dependencies are naturally “top-down”, as ethological imperatives interact with evolutionary and developmental constraints under natural selection to produce systems with capabilities and behaviors appropriate to their evolutionary needs. We describe how the optimization techniques used to construct neural network models capture some key aspects of these dependencies, and thus help explain <em>why</em> brain systems are as they are — because when a challenging ecologically-relevant goal is shared by a neural network and the brain, it places constraints on the possible mechanisms exhibited in both kinds of systems. The presence and strength of these constraints explain why some outcomes are more likely than others. 
By combining two familiar modes of explanation — one based on bottom-up mechanistic description (whose relation to neural network models we address in a companion paper) and the other based on top-down constraints, these models have the potential to illuminate brain function.</p></div>\",\"PeriodicalId\":55242,\"journal\":{\"name\":\"Cognitive Systems Research\",\"volume\":\"85 \",\"pages\":\"Article 101200\"},\"PeriodicalIF\":2.1000,\"publicationDate\":\"2023-12-28\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Cognitive Systems Research\",\"FirstCategoryId\":\"102\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S1389041723001341\",\"RegionNum\":3,\"RegionCategory\":\"心理学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Cognitive Systems Research","FirstCategoryId":"102","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1389041723001341","RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Explanatory models in neuroscience, Part 2: Functional intelligibility and the contravariance principle
Computational modeling plays an increasingly important role in neuroscience, highlighting the philosophical question of how computational models explain. In the particular case of neural network models, concerns have been raised about their intelligibility, and how these models relate (if at all) to what is found in the brain. We claim that what makes a system intelligible is an understanding of the dependencies between its behavior and the factors that are responsible for that behavior. In biology, many of these dependencies are naturally “top-down”, as ethological imperatives interact with evolutionary and developmental constraints under natural selection to produce systems with capabilities and behaviors appropriate to their evolutionary needs. We describe how the optimization techniques used to construct neural network models capture some key aspects of these dependencies, and thus help explain why brain systems are as they are — because when a challenging, ecologically relevant goal is shared by a neural network and the brain, it places constraints on the possible mechanisms exhibited in both kinds of systems. The presence and strength of these constraints explain why some outcomes are more likely than others. By combining two familiar modes of explanation — one based on bottom-up mechanistic description (whose relation to neural network models we address in a companion paper) and the other based on top-down constraints — these models have the potential to illuminate brain function.
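The modeling strategy the abstract refers to is often described as goal-driven or task-optimized modeling: a network's internal mechanism is not designed by hand but found by optimizing its parameters against a behavioral objective, so the objective acts as a top-down constraint on whatever mechanism emerges. The sketch below illustrates this in miniature; the toy task, two-layer architecture, and training settings are illustrative assumptions of this example, not details taken from the paper.

```python
# Minimal NumPy-only sketch of goal-driven (task-optimized) network modeling.
# Everything here (task, architecture, hyperparameters) is an illustrative
# assumption for this example, not a detail from the paper.

import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for an "ecologically relevant" goal: classify 2-D points by
# whether they fall inside or outside the unit circle.
X = rng.normal(size=(512, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 > 1.0).astype(float)

# A small two-layer network: the "mechanism" whose details are left to optimization.
W1 = rng.normal(scale=0.5, size=(2, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)                  # hidden representation
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # predicted probability
    return h, p.ravel()

lr = 0.5
for step in range(3000):
    h, p = forward(X)
    # Cross-entropy loss: the shared, top-down objective.
    loss = -np.mean(y * np.log(p + 1e-9) + (1.0 - y) * np.log(1.0 - p + 1e-9))

    # Plain gradient descent: optimization against the goal shapes the mechanism.
    dlogit = (p - y)[:, None] / len(y)
    dW2 = h.T @ dlogit
    db2 = dlogit.sum(axis=0)
    dh = (dlogit @ W2.T) * (1.0 - h ** 2)
    dW1 = X.T @ dh
    db1 = dh.sum(axis=0)

    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

_, p = forward(X)
print(f"loss {loss:.3f}, accuracy {np.mean((p > 0.5) == y):.2f}")
# Any parameter setting that does well on this goal must encode something like
# radial distance in its hidden layer: a toy instance of how a demanding shared
# objective constrains which mechanisms are possible.
```

The design point is not the particular network but the direction of explanation: the task objective is fixed in advance, and the mechanism is whatever optimization finds, which is the sense in which a shared, demanding goal narrows the space of viable mechanisms.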
Journal description:
Cognitive Systems Research is dedicated to the study of human-level cognition. As such, it welcomes papers which advance the understanding, design and applications of cognitive and intelligent systems, both natural and artificial.
The journal brings together a broad community studying cognition in its many facets in vivo and in silico, across the developmental spectrum, focusing on individual capacities or on entire architectures. It aims to foster debate and integrate ideas, concepts, constructs, theories, models and techniques from across different disciplines and different perspectives on human-level cognition. The scope of interest includes the study of cognitive capacities and architectures - both brain-inspired and non-brain-inspired - and the application of cognitive systems to real-world problems insofar as it offers insights relevant to the understanding of cognition.
Cognitive Systems Research therefore welcomes mature and cutting-edge research approaching cognition from a systems-oriented perspective, both theoretical and empirically-informed, in the form of original manuscripts, short communications, opinion articles, systematic reviews, and topical survey articles from the fields of Cognitive Science (including Philosophy of Cognitive Science), Artificial Intelligence/Computer Science, Cognitive Robotics, Developmental Science, Psychology, and Neuroscience and Neuromorphic Engineering. Empirical studies will be considered if they are supplemented by theoretical analyses and contributions to theory development and/or computational modelling studies.