{"title":"Discovery of comprehensible symbolic rules in a neural network","authors":"Stéphane Avner","doi":"10.1109/INBS.1995.404278","DOIUrl":null,"url":null,"abstract":"In this paper, we introduce a system that extracts comprehensible symbolic rules from a multilayer perceptron. Once the network has been trained in the usual manner, the training set is presented again, and the actual activations of the units recorded. Logical rules, corresponding to the logical combinations of the incoming signals, are extracted at each activated unit. This procedure is used for all examples belonging to the training set. Thus we obtain a set of rules which account for all logical steps taken by the network to process all known input patterns. Furthermore, we show that if some symbolic meaning were associated to every input unit, then the hidden units, which have formed concepts in order to deal with recurrent features in the input data, possess some symbolic meaning tool. Our algorithm allows the recognition or the understandability of these concepts: they are found to be reducible to conjunctions and negations of the human input concepts. Our rules can also be recombined in different ways, thus constituting some limited but sound generalization of the training set. Neural networks could learn concepts about domains where little theory was known but where many examples were available. Yet, because their knowledge was stored in the synaptic strengths under numerical form, it was difficult to comprehend what they had discovered. This system therefore provides some means of accessing the information contained inside the network.<<ETX>>","PeriodicalId":423954,"journal":{"name":"Proceedings First International Symposium on Intelligence in Neural and Biological Systems. 
INBS'95","volume":"2 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1995-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings First International Symposium on Intelligence in Neural and Biological Systems. INBS'95","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/INBS.1995.404278","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 5
Abstract
In this paper, we introduce a system that extracts comprehensible symbolic rules from a multilayer perceptron. Once the network has been trained in the usual manner, the training set is presented again and the actual activations of the units are recorded. Logical rules, corresponding to the logical combinations of the incoming signals, are extracted at each activated unit. This procedure is applied to every example in the training set. We thus obtain a set of rules that accounts for all logical steps taken by the network in processing all known input patterns. Furthermore, we show that if some symbolic meaning is associated with every input unit, then the hidden units, which have formed concepts in order to deal with recurrent features in the input data, possess some symbolic meaning too. Our algorithm allows the recognition and understanding of these concepts: they are found to be reducible to conjunctions and negations of the human input concepts. Our rules can also be recombined in different ways, thus constituting a limited but sound generalization of the training set. Neural networks can learn concepts about domains where little theory is known but where many examples are available. Yet, because their knowledge is stored in the synaptic strengths in numerical form, it is difficult to comprehend what they have discovered. This system therefore provides a means of accessing the information contained inside the network.
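The extraction procedure described in the abstract — present each training example, record which units fire, and read off a logical rule at every active unit as a conjunction (possibly with negations) of its inputs — can be illustrated with a minimal sketch. The network below uses hand-set weights and threshold units purely for illustration (the paper's own networks are trained in the usual manner); the unit names and literal-selection heuristic are assumptions, not the paper's exact algorithm.

```python
# Minimal sketch of activation-based rule extraction from a tiny
# "trained" threshold network: 3 binary inputs -> 2 hidden units -> 1 output.
# Weights are hand-set for illustration (hypothetical), not learned.

def step(x):
    """Threshold activation: the unit fires if its net input is positive."""
    return 1 if x > 0 else 0

W_hidden = [[1.0, 1.0, 0.0],    # h0 roughly encodes (A AND B)
            [0.0, -1.0, 1.0]]   # h1 roughly encodes (NOT B AND C)
b_hidden = [-1.5, -0.5]
W_out = [1.0, 1.0]              # output fires if h0 OR h1 fires
b_out = -0.5

def forward(x):
    """Compute hidden and output activations for one binary input vector."""
    h = [step(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(W_hidden, b_hidden)]
    y = step(sum(w * hi for w, hi in zip(W_out, h)) + b_out)
    return h, y

def extract_rules(x, names):
    """For one example, emit a conjunctive rule at each ACTIVE hidden unit:
    positively weighted active inputs appear as positive literals,
    negatively weighted inactive inputs as negated literals."""
    h, y = forward(x)
    rules = []
    for j, active in enumerate(h):
        if not active:
            continue
        lits = []
        for i, w in enumerate(W_hidden[j]):
            if w > 0 and x[i] == 1:
                lits.append(names[i])
            elif w < 0 and x[i] == 0:
                lits.append("NOT " + names[i])
        rules.append(f"h{j} <- " + " AND ".join(lits))
    return rules, y

names = ["A", "B", "C"]
for x in [(1, 1, 0), (0, 0, 1)]:
    rules, y = extract_rules(x, names)
    print(x, "->", rules, "output:", y)
# (1, 1, 0) yields the rule "h0 <- A AND B"; (0, 0, 1) yields "h1 <- NOT B AND C".
```

Running this over every training example accumulates a rule set covering all logical steps the network takes on known inputs, matching the abstract's observation that hidden-unit concepts reduce to conjunctions and negations of the input concepts.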