{"title":"Broad Autoencoder Features Learning for Classification Problem","authors":"Ting Wang, Wing W. Y. Ng, Wendi Li, S. Kwong","doi":"10.4018/IJCINI.20211001.OA23","DOIUrl":null,"url":null,"abstract":"Activation functions such as tanh and sigmoid functions are widely used in deep neural networks (DNNs) and pattern classification problems. To take advantage of different activation functions, this work proposes the broad autoencoder features (BAF). The BAF consists of four parallel-connected stacked autoencoders (SAEs), and each of them uses a different activation function, including sigmoid, tanh, relu, and softplus. The final learned features can merge by various nonlinear mappings from original input features with such a broad setting. It not only helps to excavate more information from the original input features through utilizing different activation functions, but also provides information diversity and increases the number of input nodes for classifier by parallel-connected strategy. Experimental results show that the BAF yields better-learned features and classification performances.","PeriodicalId":0,"journal":{"name":"","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.4018/IJCINI.20211001.OA23","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
Activation functions such as tanh and sigmoid are widely used in deep neural networks (DNNs) for pattern classification problems. To take advantage of different activation functions, this work proposes broad autoencoder features (BAF). The BAF consists of four parallel-connected stacked autoencoders (SAEs), each using a different activation function: sigmoid, tanh, ReLU, or softplus. With this broad setting, the final learned features merge various nonlinear mappings of the original input features. This not only extracts more information from the original input features by utilizing different activation functions, but also provides information diversity and increases the number of input nodes available to the classifier through the parallel-connected strategy. Experimental results show that the BAF yields better-learned features and better classification performance.
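The abstract describes the architecture but gives no implementation details. Below is a minimal PyTorch sketch of the idea as stated: four SAEs in parallel, one per activation, with their encoded features concatenated and fed to a classifier. The class names, layer widths, and two-layer SAE depth are illustrative assumptions, not the authors' code; only the four activations and the parallel-concatenation strategy come from the abstract.

```python
import torch
import torch.nn as nn

class StackedAutoencoder(nn.Module):
    """A small stacked autoencoder with a configurable activation (hypothetical depth)."""
    def __init__(self, in_dim, hidden_dims, activation):
        super().__init__()
        # Encoder: in_dim -> hidden_dims[0] -> ... -> hidden_dims[-1]
        layers, prev = [], in_dim
        for h in hidden_dims:
            layers += [nn.Linear(prev, h), activation]
            prev = h
        self.encoder = nn.Sequential(*layers)
        # Decoder mirrors the encoder for reconstruction-based pretraining
        dims = [in_dim] + list(hidden_dims)
        layers = []
        for i in range(len(dims) - 1, 0, -1):
            layers += [nn.Linear(dims[i], dims[i - 1]), activation]
        self.decoder = nn.Sequential(*layers)

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

class BAF(nn.Module):
    """Broad autoencoder features: four parallel SAEs, one per activation,
    whose encoded features are concatenated as input to a linear classifier."""
    def __init__(self, in_dim, hidden_dims, n_classes):
        super().__init__()
        # The four activations named in the abstract
        activations = [nn.Sigmoid(), nn.Tanh(), nn.ReLU(), nn.Softplus()]
        self.saes = nn.ModuleList(
            StackedAutoencoder(in_dim, hidden_dims, act) for act in activations
        )
        self.classifier = nn.Linear(len(activations) * hidden_dims[-1], n_classes)

    def forward(self, x):
        # Concatenate four different nonlinear mappings of the same input,
        # broadening the feature set seen by the classifier
        feats = torch.cat([sae.encoder(x) for sae in self.saes], dim=1)
        return self.classifier(feats)

# Usage sketch: 64-dim inputs, two hidden layers per SAE, 10 classes
model = BAF(in_dim=64, hidden_dims=(128, 32), n_classes=10)
logits = model(torch.randn(8, 64))   # -> shape (8, 10)
```

In this sketch each SAE would first be pretrained on its reconstruction loss (via `forward`, which returns the reconstruction and the code), after which only the encoders are used to supply the concatenated BAF features to the classifier; the paper itself should be consulted for the actual training procedure and layer sizes.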