{"title":"Why tanh: choosing a sigmoidal function","authors":"B. Kalman, S. Kwasny","doi":"10.1109/IJCNN.1992.227257","DOIUrl":null,"url":null,"abstract":"As hardware implementations of backpropagation and related training algorithms are anticipated, the choice of a sigmoidal function should be carefully justified. Attention should focus on choosing an activation function in a neural unit that exhibits the best properties for training. The author argues for the use of the hyperbolic tangent. While the exact shape of the sigmoidal makes little difference once the network is trained, it is shown that it possesses particular properties that make it appealing for use while training. By paying attention to scaling it is illustrated that tanh (1.5*) has the additional advantage of equalizing training over layers. This result can easily generalize to several standard sigmoidal functions commonly in use.<<ETX>>","PeriodicalId":286849,"journal":{"name":"[Proceedings 1992] IJCNN International Joint Conference on Neural Networks","volume":"18 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1992-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"209","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"[Proceedings 1992] IJCNN International Joint Conference on Neural Networks","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IJCNN.1992.227257","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
As hardware implementations of backpropagation and related training algorithms are anticipated, the choice of a sigmoidal function should be carefully justified. Attention should focus on choosing an activation function for a neural unit that exhibits the best properties for training. The authors argue for the use of the hyperbolic tangent. While the exact shape of the sigmoidal function makes little difference once the network is trained, it is shown that tanh possesses particular properties that make it appealing for use during training. By paying attention to scaling, it is illustrated that tanh(1.5x) has the additional advantage of equalizing training over layers. This result generalizes easily to several standard sigmoidal functions in common use.
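The sketch below is not from the paper; it is a minimal numerical illustration of the comparison the abstract describes, contrasting the logistic sigmoid with a tanh whose argument is scaled by 1.5 (the factor quoted in the abstract). The function names and the chosen sample points are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the paper's code): comparing the logistic
# sigmoid with tanh(1.5x) as candidate activation functions for training.
import numpy as np

def logistic(x):
    """Standard logistic sigmoid, range (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def logistic_grad(x):
    """Derivative of the logistic sigmoid; peaks at 0.25 when x = 0."""
    s = logistic(x)
    return s * (1.0 - s)

def tanh_scaled(x, a=1.5):
    """Hyperbolic tangent with a scaled argument, range (-1, 1)."""
    return np.tanh(a * x)

def tanh_scaled_grad(x, a=1.5):
    """Derivative of tanh(a*x); peaks at a when x = 0."""
    return a * (1.0 - np.tanh(a * x) ** 2)

x = np.linspace(-3, 3, 7)
print("x              :", x)
print("logistic'(x)   :", np.round(logistic_grad(x), 3))
print("tanh(1.5x)'(x) :", np.round(tanh_scaled_grad(x), 3))
# tanh is zero-centred and has a much larger derivative near the origin
# (1.5 here vs. 0.25 for the logistic), so backpropagated error signals
# are attenuated less from layer to layer during training.
```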