Distributed decomposition architectures for neural decision-makers

S. Mukhopadhyay, Haiying Wang
Proceedings of the 38th IEEE Conference on Decision and Control (Cat. No.99CH36304), published 1999-12-07
DOI: 10.1109/CDC.1999.831326 (https://doi.org/10.1109/CDC.1999.831326)
Cited by: 1

Abstract

There is growing interest in the neural networks community in employing systems consisting of multiple small neural decision-making modules instead of a single large monolithic one. Motivated by such interests and by other studies of distributed decision architectures in large-scale systems theory, we propose two feature decomposition models (parallel and tandem) for interconnecting multiple neural networks. In both models, the overall feature set is partitioned into several disjoint subsets so that each subset is processed by a separate neural network. In the parallel interconnection, there is no communication between the decision-makers during the decision-making process, and their outputs are combined by a combining or fusion function to generate overall decisions. In contrast, a tandem connection of two networks (for illustration purposes) requires that the outputs of one (the leader) form additional inputs of the other (the follower), and the output of the latter determines the overall decision. A feature decomposition algorithm is presented to decide, based on training data, how to partition the total feature set between the individual modules. The problem of learning (and the necessary information flow) in the two distributed architectures is examined. Finally, the performance of a feature decomposition distributed model is compared with that of a single monolithic network on a benchmark real-world pattern recognition problem to illustrate the advantages of the distributed approach.
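The paper itself gives no code, but the wiring of the two interconnection patterns can be sketched as follows. This is a minimal illustration only: the "networks" are trivial threshold scorers standing in for trained neural modules, and names such as `parallel_decision`, `tandem_decision`, and the max-based fusion function are assumptions, not the authors' implementation.

```python
# Sketch of the two feature-decomposition architectures from the abstract.
# Each module sees only its own disjoint feature subset; the modules here
# are simple threshold scorers, not trained neural networks.

def module_a(features):
    # Stand-in decision module for feature subset A.
    return 1 if sum(features) > 1.0 else 0

def module_b(features, extra=()):
    # Stand-in decision module for feature subset B; `extra` carries the
    # leader's output in the tandem configuration.
    return 1 if sum(features) + sum(extra) > 1.0 else 0

def parallel_decision(x_a, x_b):
    # Parallel: no communication between modules during decision-making;
    # a fusion function (here, a simple max / logical OR) combines outputs.
    return max(module_a(x_a), module_b(x_b))

def tandem_decision(x_a, x_b):
    # Tandem: the leader's output becomes an additional input of the
    # follower, whose output is the overall decision.
    leader_out = module_a(x_a)
    return module_b(x_b, extra=(leader_out,))

print(parallel_decision([0.6, 0.7], [0.2, 0.1]))  # -> 1 (module A fires)
print(tandem_decision([0.6, 0.7], [0.2, 0.1]))    # -> 1 (leader tips the follower)
```

In the parallel call, module B alone would output 0, but fusion with module A yields 1; in the tandem call, the leader's output raises the follower's input sum above threshold. The paper's feature decomposition algorithm would decide which features land in `x_a` versus `x_b`.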