Explainable neural networks that simulate reasoning

Nature Computational Science | Published: 2021-09-22 | DOI: 10.1038/s43588-021-00132-w | IF 12.0, Q1 (Computer Science, Interdisciplinary Applications)
Paul J. Blazek, Milo M. Lin
Nature Computational Science 1(9), 607–618 (2021). https://www.nature.com/articles/s43588-021-00132-w
Citations: 14

Abstract

The success of deep neural networks suggests that cognition may emerge from indecipherable patterns of distributed neural activity. Yet these networks are pattern-matching black boxes that cannot simulate higher cognitive functions and lack numerous neurobiological features. Accordingly, they are currently insufficient computational models for understanding neural information processing. Here, we show how neural circuits can directly encode cognitive processes via simple neurobiological principles. To illustrate, we implemented this model in a non-gradient-based machine learning algorithm to train deep neural networks called essence neural networks (ENNs). Neural information processing in ENNs is intrinsically explainable, even on benchmark computer vision tasks. ENNs can also simulate higher cognitive functions such as deliberation, symbolic reasoning and out-of-distribution generalization. ENNs display network properties associated with the brain, such as modularity, distributed and localist firing, and adversarial robustness. ENNs establish a broad computational framework to decipher the neural basis of cognition and pursue artificial general intelligence.

Editor's summary: The authors demonstrate how neural systems can encode cognitive functions, and use the proposed model to train robust, scalable deep neural networks that are explainable and capable of symbolic reasoning and domain generalization.
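The abstract's core idea, that neural circuits can directly encode cognitive processes so that each neuron's role is explainable by construction, has a classical antecedent in threshold-unit networks whose weights are set from logical concepts rather than learned by gradient descent. The sketch below is purely illustrative and is not the authors' ENN algorithm: it builds XOR from McCulloch-Pitts threshold neurons, where every hidden unit corresponds to a nameable concept (OR, NAND).

```python
# Illustrative sketch (not the paper's ENN method): a tiny network whose
# weights are chosen directly from logical concepts, making every neuron's
# function interpretable without any gradient-based training.

def step(x):
    """Heaviside threshold activation: the unit fires (1) iff input >= 0."""
    return 1 if x >= 0 else 0

def neuron(inputs, weights, bias):
    """A McCulloch-Pitts threshold unit: weighted sum, then hard threshold."""
    return step(sum(w * x for w, x in zip(weights, inputs)) + bias)

def xor(a, b):
    """XOR composed from two interpretable hidden concepts."""
    h_or = neuron([a, b], [1, 1], -1)     # fires iff a OR b
    h_nand = neuron([a, b], [-1, -1], 1)  # fires iff NOT (a AND b)
    return neuron([h_or, h_nand], [1, 1], -2)  # fires iff both concepts fire

if __name__ == "__main__":
    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "->", xor(a, b))
```

Because each weight vector is written down rather than learned, the network's "reasoning" can be read off directly: the output fires exactly when the OR concept and the NAND concept both fire, which is the definition of XOR.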

