Distributed decomposition architectures for neural decision-makers
S. Mukhopadhyay, Haiying Wang
Proceedings of the 38th IEEE Conference on Decision and Control (Cat. No.99CH36304), 1999-12-07
DOI: 10.1109/CDC.1999.831326
Citations: 1
Abstract
There is growing interest within the neural networks community in employing systems consisting of multiple small neural decision-making modules instead of a single large monolithic one. Motivated by this interest, and by studies of distributed decision architectures in large-scale systems theory, we propose two feature decomposition models (parallel and tandem) for interconnecting multiple neural networks. In both models, the overall feature set is partitioned into several disjoint subsets, each processed by a separate neural network. In the parallel interconnection, there is no communication between the decision-makers during the decision-making process, and their outputs are combined by a combining or fusion function to generate the overall decision. In contrast, a tandem connection of two networks (used here for illustration) requires that the outputs of one network (the leader) form additional inputs to the other (the follower), and the output of the latter determines the overall decision. A feature decomposition algorithm is presented to decide, based on training data, how to partition the total feature set among the individual modules. The problem of learning (and the necessary information flow) in the two distributed architectures is examined. Finally, the performance of a feature decomposition distributed model is compared with that of a single monolithic network on a benchmark real-world pattern recognition problem to illustrate the advantages of the distributed approach.
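The two interconnection patterns described in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the one-hidden-layer modules, the averaging fusion function, and the 3/3 feature split are all assumptions chosen for brevity, and the weights are left untrained.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_net(n_in, n_hidden=4):
    """Random parameters for a one-hidden-layer module (untrained, illustrative)."""
    return (rng.normal(size=(n_in, n_hidden)), np.zeros(n_hidden),
            rng.normal(size=(n_hidden, 1)), np.zeros(1))

def forward(x, params):
    """tanh hidden layer followed by a sigmoid output: a score in (0, 1)."""
    W1, b1, W2, b2 = params
    h = np.tanh(x @ W1 + b1)
    z = (h @ W2 + b2)[0]
    return 1.0 / (1.0 + np.exp(-z))

# Disjoint feature partition: features 0-2 feed module A, features 3-5 module B.
x = rng.normal(size=6)
x_a, x_b = x[:3], x[3:]

# Parallel interconnection: each module decides from its own subset with no
# communication; a fusion function (here a simple average) combines the scores.
net_a, net_b = make_net(3), make_net(3)
parallel_score = 0.5 * (forward(x_a, net_a) + forward(x_b, net_b))

# Tandem interconnection: the leader's output becomes an extra input to the
# follower, and the follower's output is the overall decision.
leader = make_net(3)
follower = make_net(4)  # sees its own 3 features plus the leader's score
leader_score = forward(x_a, leader)
tandem_score = forward(np.concatenate([x_b, [leader_score]]), follower)

decision = int(tandem_score > 0.5)  # threshold the final score
```

Note the structural difference: the parallel model needs only the fusion function after both modules finish, while the tandem model imposes a sequential dependency, since the follower cannot evaluate until the leader's output is available.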