{"title":"Training neurofuzzy systems","authors":"D.J Mills, M Brown, C.J Harris","doi":"10.1016/0066-4138(94)90064-7","DOIUrl":null,"url":null,"abstract":"<div><p>A neurofuzzy system combines the positive attributes of a neural network and a fuzzy system by providing a transparent framework for representing linguistic rules with well defined modelling and learning characteristics. Unfortunately, their application is limited to problems involving a small number of input variables by the <em>curse of dimensionality</em> where the the size of the rule base and the training set increase as an exponential function of the input dimension. The curse can be alleviated by a number of approaches but one which has recently received much attention is the exploitation of <em>redundancy</em>. Many functions can be adequately approximated by an <em>additive</em> model whose output is a sum over several smaller dimensional subrnodels. This technique is called <em>global partitioning</em> and the aim of an algorithm designed to construct the approximation is to automatically determine the number of submodels and the subset of input variables for each submodel. The construction algorithm is an iterative process where each iteration must identify a set of candidate refinements and evaluate the associated candidate models. This leads naturally to the problem of how to train the candidate models and the approach taken depends on whether they contain one or multiple submodels.</p></div>","PeriodicalId":100097,"journal":{"name":"Annual Review in Automatic Programming","volume":"19 ","pages":"Pages 191-196"},"PeriodicalIF":0.0000,"publicationDate":"1994-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/0066-4138(94)90064-7","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Annual Review in Automatic Programming","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/0066413894900647","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
A neurofuzzy system combines the positive attributes of a neural network and a fuzzy system by providing a transparent framework for representing linguistic rules with well-defined modelling and learning characteristics. Unfortunately, the application of such systems is limited to problems involving a small number of input variables by the curse of dimensionality, where the size of the rule base and the training set increase as an exponential function of the input dimension. The curse can be alleviated by a number of approaches, but one that has recently received much attention is the exploitation of redundancy. Many functions can be adequately approximated by an additive model whose output is a sum over several lower-dimensional submodels. This technique is called global partitioning, and the aim of an algorithm designed to construct the approximation is to automatically determine the number of submodels and the subset of input variables for each submodel. The construction algorithm is an iterative process where each iteration must identify a set of candidate refinements and evaluate the associated candidate models. This leads naturally to the problem of how to train the candidate models, and the approach taken depends on whether they contain one or multiple submodels.
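The following is a minimal sketch, not taken from the paper, of the rule-base-size argument behind the abstract: a complete tensor-product rule base grows exponentially with the number of inputs, whereas an additive model only pays for each lower-dimensional submodel. The fuzzy-set count, the number of inputs, and the submodel subsets used below are hypothetical values chosen purely for illustration.

```python
# Illustrative sketch (assumed values, not from the paper): compare the
# number of rules in a full n-dimensional neurofuzzy rule base with the
# number needed by an additive decomposition into smaller submodels.

def full_rule_count(sets_per_input: int, n_inputs: int) -> int:
    """Rules in a complete tensor-product rule base: exponential in n_inputs."""
    return sets_per_input ** n_inputs

def additive_rule_count(sets_per_input: int, submodel_dims: list[int]) -> int:
    """Rules in an additive model: a sum over its lower-dimensional submodels."""
    return sum(sets_per_input ** d for d in submodel_dims)

if __name__ == "__main__":
    m, n = 7, 6  # hypothetical: 7 fuzzy sets per input, 6 input variables
    print(full_rule_count(m, n))              # 7**6 = 117649 rules
    # hypothetical additive model with submodels over 2, 2 and 1 variables
    print(additive_rule_count(m, [2, 2, 1]))  # 7**2 + 7**2 + 7 = 105 rules
```

Under these assumed numbers the additive structure reduces the rule count by three orders of magnitude, which is the redundancy the construction algorithm tries to exploit when it chooses the number of submodels and the variable subset assigned to each one.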