Diverse activation functions in deep learning
Bin Wang, Tianrui Li, Yanyong Huang, Huaishao Luo, Dongming Guo, S. Horng
2017 12th International Conference on Intelligent Systems and Knowledge Engineering (ISKE), November 2017
DOI: 10.1109/ISKE.2017.8258768
We introduce the concept of diverse activation functions and apply it to the Convolutional Auto-Encoder (CAE), yielding the diverse activation CAE (DaCAE), which considerably reduces reconstruction loss. In contrast to a vanilla CAE, which uses activation functions of a single type, DaCAE incorporates diverse activations by considering their cooperation and placement. In terms of reconstruction capability, DaCAE significantly outperforms both the vanilla CAE and a fully connected Auto-Encoder, and we derive rules of thumb for designing networks with diverse activations. Exploiting the high quality of the latent bottleneck features extracted by DaCAE, we show that a fuzzy-rule classifier outperforms a softmax layer in supervised learning. These results can be seen as new research directions in using diverse activations to train deep neural networks and in combining fuzzy inference systems with deep learning.
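To make the core idea concrete, the sketch below shows a small convolutional auto-encoder whose layers use different activation functions rather than a single type throughout. This is a minimal illustrative sketch in PyTorch: the layer sizes, the particular activations (ReLU, ELU, Tanh, Sigmoid), and their placement are assumptions for illustration, not the configuration reported for DaCAE.

```python
# Illustrative sketch only: a small convolutional auto-encoder that mixes
# different activation functions across layers ("diverse activations").
# The architecture and activation choices here are assumptions, not the
# DaCAE configuration from the paper.
import torch
import torch.nn as nn


class DiverseActivationCAE(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: each conv block uses a different activation.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1),   # 28x28 -> 14x14
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),  # 14x14 -> 7x7
            nn.ELU(),
        )
        # Decoder mirrors the encoder with its own mix of activations.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, kernel_size=3, stride=2,
                               padding=1, output_padding=1),        # 7x7 -> 14x14
            nn.Tanh(),
            nn.ConvTranspose2d(16, 1, kernel_size=3, stride=2,
                               padding=1, output_padding=1),        # 14x14 -> 28x28
            nn.Sigmoid(),  # pixel intensities in [0, 1]
        )

    def forward(self, x):
        z = self.encoder(x)  # latent "bottleneck" features
        return self.decoder(z), z


# Usage: reconstruct a batch of 28x28 grayscale images and measure the
# reconstruction loss; the bottleneck tensor is what a downstream
# classifier (e.g. a fuzzy-rule classifier) would consume as features.
model = DiverseActivationCAE()
images = torch.rand(8, 1, 28, 28)
reconstruction, bottleneck = model(images)
loss = nn.functional.mse_loss(reconstruction, images)
```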