{"title":"广义自编码器特征学习分类问题","authors":"Ting Wang, Wing W. Y. Ng, Wendi Li, S. Kwong","doi":"10.4018/IJCINI.20211001.OA23","DOIUrl":null,"url":null,"abstract":"Activation functions such as tanh and sigmoid functions are widely used in deep neural networks (DNNs) and pattern classification problems. To take advantage of different activation functions, this work proposes the broad autoencoder features (BAF). The BAF consists of four parallel-connected stacked autoencoders (SAEs), and each of them uses a different activation function, including sigmoid, tanh, relu, and softplus. The final learned features can merge by various nonlinear mappings from original input features with such a broad setting. It not only helps to excavate more information from the original input features through utilizing different activation functions, but also provides information diversity and increases the number of input nodes for classifier by parallel-connected strategy. Experimental results show that the BAF yields better-learned features and classification performances.","PeriodicalId":43637,"journal":{"name":"International Journal of Cognitive Informatics and Natural Intelligence","volume":"6 1","pages":"1-15"},"PeriodicalIF":0.6000,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Broad Autoencoder Features Learning for Classification Problem\",\"authors\":\"Ting Wang, Wing W. Y. Ng, Wendi Li, S. Kwong\",\"doi\":\"10.4018/IJCINI.20211001.OA23\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Activation functions such as tanh and sigmoid functions are widely used in deep neural networks (DNNs) and pattern classification problems. To take advantage of different activation functions, this work proposes the broad autoencoder features (BAF). The BAF consists of four parallel-connected stacked autoencoders (SAEs), and each of them uses a different activation function, including sigmoid, tanh, relu, and softplus. The final learned features can merge by various nonlinear mappings from original input features with such a broad setting. It not only helps to excavate more information from the original input features through utilizing different activation functions, but also provides information diversity and increases the number of input nodes for classifier by parallel-connected strategy. 
Experimental results show that the BAF yields better-learned features and classification performances.\",\"PeriodicalId\":43637,\"journal\":{\"name\":\"International Journal of Cognitive Informatics and Natural Intelligence\",\"volume\":\"6 1\",\"pages\":\"1-15\"},\"PeriodicalIF\":0.6000,\"publicationDate\":\"2021-10-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"International Journal of Cognitive Informatics and Natural Intelligence\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.4018/IJCINI.20211001.OA23\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q4\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Cognitive Informatics and Natural Intelligence","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.4018/IJCINI.20211001.OA23","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Broad Autoencoder Features Learning for Classification Problem
Activation functions such as tanh and sigmoid are widely used in deep neural networks (DNNs) and pattern classification problems. To take advantage of different activation functions, this work proposes broad autoencoder features (BAF). The BAF consists of four parallel-connected stacked autoencoders (SAEs), each using a different activation function: sigmoid, tanh, ReLU, or softplus. With this broad setting, the final learned features merge several nonlinear mappings of the original input features. This not only extracts more information from the original input features by exploiting different activation functions, but also provides information diversity and increases the number of input nodes available to the classifier through the parallel-connection strategy. Experimental results show that the BAF yields better-learned features and better classification performance.
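The abstract's description of the BAF suggests a straightforward structure: four stacked autoencoders, one per activation function, applied to the same input, with their learned codes concatenated into a single broad feature vector for the classifier. Below is a minimal PyTorch sketch of that idea; the layer widths, the two-layer encoder depth, the linear classifier head, and the names StackedAutoencoder and BroadAutoencoderFeatures are illustrative assumptions, not the authors' exact configuration or training procedure.

```python
# Minimal sketch of the BAF idea, assuming a simple PyTorch setup.
# Sizes and the classifier head are illustrative choices only.
import torch
import torch.nn as nn

class StackedAutoencoder(nn.Module):
    """Two-layer encoder/decoder pair built around a chosen activation."""
    def __init__(self, in_dim, hidden_dim, code_dim, activation):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, hidden_dim), activation,
            nn.Linear(hidden_dim, code_dim), activation,
        )
        self.decoder = nn.Sequential(
            nn.Linear(code_dim, hidden_dim), activation,
            nn.Linear(hidden_dim, in_dim),
        )

    def forward(self, x):
        code = self.encoder(x)
        return code, self.decoder(code)

class BroadAutoencoderFeatures(nn.Module):
    """Four parallel SAEs (sigmoid, tanh, ReLU, softplus); their codes are
    concatenated into a broad feature vector and passed to a classifier."""
    def __init__(self, in_dim, hidden_dim, code_dim, num_classes):
        super().__init__()
        activations = [nn.Sigmoid(), nn.Tanh(), nn.ReLU(), nn.Softplus()]
        self.saes = nn.ModuleList(
            StackedAutoencoder(in_dim, hidden_dim, code_dim, act)
            for act in activations
        )
        self.classifier = nn.Linear(4 * code_dim, num_classes)

    def forward(self, x):
        codes, recons = zip(*(sae(x) for sae in self.saes))
        merged = torch.cat(codes, dim=1)  # parallel-connected broad features
        return self.classifier(merged), recons

# Example usage on random data (shapes only).
model = BroadAutoencoderFeatures(in_dim=64, hidden_dim=32, code_dim=16, num_classes=10)
x = torch.randn(8, 64)
logits, recons = model(x)
print(logits.shape)  # torch.Size([8, 10])
```

In practice each SAE would typically first be trained with its own reconstruction loss before the merged codes are used to train the classifier; that training loop is omitted from this sketch.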
Journal description:
The International Journal of Cognitive Informatics and Natural Intelligence (IJCINI) encourages submissions that transcend disciplinary boundaries and is devoted to the rapid publication of high-quality papers. The themes of IJCINI are natural intelligence, autonomic computing, and neuroinformatics. IJCINI aims to provide the first forum and platform in the world for researchers, practitioners, and graduate students to investigate the cognitive mechanisms and processes of human information processing, and to stimulate transdisciplinary efforts in cognitive informatics and natural intelligence research and engineering applications.