{"title":"Deep belief network with fuzzy parameters and its membership function sensitivity analysis","authors":"","doi":"10.1016/j.neucom.2024.128716","DOIUrl":null,"url":null,"abstract":"<div><div>Over the last few years, deep belief networks (DBNs) have been extensively utilized for efficient and reliable performance in several complex systems. One critical factor contributing to the enhanced learning of the DBN layers is the handling of network parameters, such as weights and biases. The efficient training of these parameters significantly influences the overall enhanced performance of the DBN. However, the initialization of these parameters is often random, and the data samples are normally corrupted by unwanted noise. This causes the uncertainty to arise among weights and biases of the DBNs, which ultimately hinders the performance of the network. To address this challenge, we propose a novel DBN model with weights and biases represented using fuzzy sets. The approach systematically handles inherent uncertainties in parameters resulting in a more robust and reliable training process. We show the working of the proposed algorithm considering four widely used benchmark datasets such as: MNSIT, n-MNIST (MNIST with additive white Gaussian noise (AWGN) and MNIST with motion blur) and CIFAR-10. The experimental results show superiority of the proposed approach as compared to classical DBN in terms of robustness and enhanced performance. Moreover, it has the capability to produce equivalent results with a smaller number of nodes in the hidden layer; thus, reducing the computational complexity of the network architecture. Additionally, we also study the sensitivity analysis for stability and consistency by considering different membership functions to model the uncertain weights and biases. Further, we establish the statistical significance of the obtained results by conducting both one-way and Kruskal-Wallis analyses of variance tests.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":null,"pages":null},"PeriodicalIF":5.5000,"publicationDate":"2024-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neurocomputing","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0925231224014875","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Abstract
Over the last few years, deep belief networks (DBNs) have been extensively utilized for efficient and reliable performance in several complex systems. One critical factor contributing to the enhanced learning of the DBN layers is the handling of network parameters, such as weights and biases, whose efficient training significantly influences the overall performance of the DBN. However, these parameters are typically initialized at random, and the data samples are often corrupted by unwanted noise. This introduces uncertainty into the weights and biases of the DBN, which ultimately hinders the performance of the network. To address this challenge, we propose a novel DBN model whose weights and biases are represented using fuzzy sets. The approach systematically handles the inherent uncertainty in the parameters, resulting in a more robust and reliable training process. We demonstrate the proposed algorithm on four widely used benchmark datasets: MNIST, n-MNIST (MNIST with additive white Gaussian noise (AWGN) and MNIST with motion blur), and CIFAR-10. The experimental results show the superiority of the proposed approach over the classical DBN in terms of robustness and performance. Moreover, the proposed model can produce equivalent results with fewer nodes in the hidden layer, thus reducing the computational complexity of the network architecture. Additionally, we study the stability and consistency of the model through a sensitivity analysis that considers different membership functions for the uncertain weights and biases. Further, we establish the statistical significance of the obtained results using both one-way and Kruskal-Wallis analysis-of-variance tests.
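To make the fuzzy-parameter idea concrete, the following is a minimal sketch, not the authors' implementation: each weight and bias is modeled as a triangular fuzzy number (left, center, right vertices) and defuzzified by the centroid rule before the usual sigmoid forward pass of a DBN layer. All identifiers (`FuzzyLayer`, `spread`) and the choice of a triangular membership function are illustrative assumptions; the paper's sensitivity analysis compares several membership functions.

```python
# Illustrative sketch only: weights/biases as triangular fuzzy numbers,
# defuzzified by the centroid rule before a sigmoid forward pass.
import numpy as np

class FuzzyLayer:
    def __init__(self, n_in, n_out, spread=0.1, rng=None):
        rng = rng or np.random.default_rng(0)
        center = rng.normal(scale=0.01, size=(n_in, n_out))
        # Triangular membership: vertices (center - spread, center, center + spread).
        self.w_left, self.w_center, self.w_right = center - spread, center, center + spread
        self.b_center = np.zeros(n_out)
        self.b_left, self.b_right = self.b_center - spread, self.b_center + spread

    @staticmethod
    def defuzzify(left, center, right):
        # Centroid of a triangular fuzzy number is the mean of its three vertices.
        return (left + center + right) / 3.0

    def forward(self, x):
        w = self.defuzzify(self.w_left, self.w_center, self.w_right)
        b = self.defuzzify(self.b_left, self.b_center, self.b_right)
        return 1.0 / (1.0 + np.exp(-(x @ w + b)))  # sigmoid, as in RBM/DBN layers

# Usage: one hidden layer of a DBN-style stack on dummy data.
x = np.random.default_rng(1).random((4, 784))      # e.g. flattened 28x28 MNIST images
h = FuzzyLayer(784, 128).forward(x)
print(h.shape)                                     # (4, 128)
```

One plausible training scheme under these assumptions would apply the gradient of the defuzzified weight to all three vertices jointly, so that the spread of each fuzzy number absorbs noise-induced uncertainty rather than the crisp value alone.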
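The significance testing mentioned at the end of the abstract uses two standard procedures available directly in SciPy. The snippet below shows the calls on invented per-run accuracy values; the numbers are placeholders, not results from the paper.

```python
# Hedged illustration: one-way ANOVA and Kruskal-Wallis tests on
# hypothetical per-run accuracies of two models.
from scipy.stats import f_oneway, kruskal

fuzzy_dbn_acc   = [0.981, 0.979, 0.983, 0.980, 0.982]  # placeholder values
classic_dbn_acc = [0.962, 0.958, 0.965, 0.960, 0.961]  # placeholder values

f_stat, p_anova   = f_oneway(fuzzy_dbn_acc, classic_dbn_acc)
h_stat, p_kruskal = kruskal(fuzzy_dbn_acc, classic_dbn_acc)
print(f"one-way ANOVA p={p_anova:.4g}, Kruskal-Wallis p={p_kruskal:.4g}")
```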
Journal Introduction
Neurocomputing publishes articles describing recent fundamental contributions in the field of neurocomputing. The essential topics covered are neurocomputing theory, practice, and applications.