{"title":"A functional manipulation for improving tolerance against multiple-valued weight faults of feedforward neural networks","authors":"N. Kamiura, Yasuyuki Taniguchi, N. Matsui","doi":"10.1109/ISMVL.2001.924593","DOIUrl":null,"url":null,"abstract":"In this paper we propose feedforward neural networks (NNs for short) tolerating multiple-valued stuck-at faults of connection weights. To improve the fault tolerance against faults with small false absolute values, we employ the activation function with the relatively gentle gradient for the last layer, and steepen the gradient of the function in the intermediate layer. For faults with large false absolute values, the function working as filter inhibits their influence by setting products of inputs and faulty weights to allowable values. The experimental results show that our NN is superior in fault tolerance and learning time to other NNs employing approaches based on fault injection, forcible weight limit and so forth.","PeriodicalId":297353,"journal":{"name":"Proceedings 31st IEEE International Symposium on Multiple-Valued Logic","volume":"8 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2001-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings 31st IEEE International Symposium on Multiple-Valued Logic","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISMVL.2001.924593","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
In this paper we propose feedforward neural networks (NNs for short) that tolerate multiple-valued stuck-at faults in connection weights. To improve fault tolerance against faults with small false absolute values, we employ an activation function with a relatively gentle gradient in the last layer and steepen the gradient of the function in the intermediate layer. For faults with large false absolute values, a function working as a filter inhibits their influence by clipping the products of inputs and faulty weights to allowable values. Experimental results show that our NN is superior in fault tolerance and learning time to NNs that employ approaches based on fault injection, forcible weight limits, and so forth.
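The abstract combines two mechanisms: layer-dependent activation slopes and a filter that clips input-weight products. The following Python sketch illustrates these ideas under stated assumptions; the slope values, clip limit, and all names (STEEP_SLOPE, GENTLE_SLOPE, filtered_products) are hypothetical choices for illustration, not values or identifiers from the paper.

```python
import numpy as np

# Hypothetical constants; the paper does not publish concrete values.
STEEP_SLOPE = 4.0    # intermediate (hidden) layer: steep sigmoid
GENTLE_SLOPE = 0.5   # last (output) layer: gentle sigmoid
CLIP_LIMIT = 2.0     # assumed allowable bound on input-weight products

def sigmoid(x, slope):
    """Logistic activation with an adjustable gradient (slope)."""
    return 1.0 / (1.0 + np.exp(-slope * x))

def filtered_products(inputs, weights, limit=CLIP_LIMIT):
    """Clip each input-weight product to the allowable range, so a weight
    stuck at a large false absolute value cannot dominate the weighted sum."""
    return np.clip(inputs[:, None] * weights, -limit, limit)

def forward(x, w_hidden, w_out):
    """Forward pass: filtered products, steep hidden activation,
    gentle output activation."""
    # Intermediate layer: steep gradient sharpens hidden responses,
    # reducing sensitivity to small weight errors downstream.
    h = sigmoid(filtered_products(x, w_hidden).sum(axis=0), STEEP_SLOPE)
    # Last layer: gentle gradient, so small false weight values
    # perturb the output only slightly.
    return sigmoid(filtered_products(h, w_out).sum(axis=0), GENTLE_SLOPE)

# Example: a fault driving one weight to a large false value is bounded
# by the filter before it reaches the activation.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=4)
w_hidden = rng.normal(size=(4, 3))
w_out = rng.normal(size=(3, 2))
w_hidden[0, 0] = 50.0  # injected stuck-at fault with a large absolute value
print(forward(x, w_hidden, w_out))
```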