{"title":"随机对角线近似最大下降的径向效应","authors":"H. Tan, K. Lim, H. Harno","doi":"10.1109/ICSIPA.2017.8120611","DOIUrl":null,"url":null,"abstract":"Stochastic Diagonal Approximate Greatest Descent (SDAGD) is proposed to manage the optimization in two stages, (a) apply a radial boundary to estimate step length when the weights are far from solution, (b) apply Newton method when the weights are within the solution level set. This is inspired by a multi-stage decision control system where different strategies is used at different conditions. In numerical optimization context, larger steps should be taken at the beginning of optimization and gradually reduced when it is near to the minimum point. Nevertheless, the intuition of determining the radial boundary when the optimized parameters are far from the solution is yet to be investigated for high dimensional data. Radial step length in SDAGD manipulates the relative step length for iteration construction. SDAGD is implemented in a two layer Multilayer Perceptron to evaluate the effects of R on artificial neural networks. It is concluded that the greater the value of R, the higher the learning rate of SDAGD algorithm when the value of R is constrained in between 100 to 10,000.","PeriodicalId":268112,"journal":{"name":"2017 IEEE International Conference on Signal and Image Processing Applications (ICSIPA)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2017-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Radial effect in stochastic diagonal approximate greatest descent\",\"authors\":\"H. Tan, K. Lim, H. Harno\",\"doi\":\"10.1109/ICSIPA.2017.8120611\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Stochastic Diagonal Approximate Greatest Descent (SDAGD) is proposed to manage the optimization in two stages, (a) apply a radial boundary to estimate step length when the weights are far from solution, (b) apply Newton method when the weights are within the solution level set. This is inspired by a multi-stage decision control system where different strategies is used at different conditions. In numerical optimization context, larger steps should be taken at the beginning of optimization and gradually reduced when it is near to the minimum point. Nevertheless, the intuition of determining the radial boundary when the optimized parameters are far from the solution is yet to be investigated for high dimensional data. Radial step length in SDAGD manipulates the relative step length for iteration construction. SDAGD is implemented in a two layer Multilayer Perceptron to evaluate the effects of R on artificial neural networks. 
It is concluded that the greater the value of R, the higher the learning rate of SDAGD algorithm when the value of R is constrained in between 100 to 10,000.\",\"PeriodicalId\":268112,\"journal\":{\"name\":\"2017 IEEE International Conference on Signal and Image Processing Applications (ICSIPA)\",\"volume\":\"28 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2017-09-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2017 IEEE International Conference on Signal and Image Processing Applications (ICSIPA)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICSIPA.2017.8120611\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2017 IEEE International Conference on Signal and Image Processing Applications (ICSIPA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICSIPA.2017.8120611","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Radial effect in stochastic diagonal approximate greatest descent
Stochastic Diagonal Approximate Greatest Descent (SDAGD) is proposed to manage the optimization in two stages: (a) apply a radial boundary to estimate the step length when the weights are far from the solution, and (b) apply the Newton method when the weights are within the solution level set. This design is inspired by multi-stage decision control systems, where different strategies are used under different conditions. In numerical optimization, larger steps should be taken at the beginning and gradually reduced as the iterate approaches the minimum point. However, how to determine the radial boundary when the optimized parameters are far from the solution has yet to be investigated for high-dimensional data. The radial step length R in SDAGD controls the relative step length used to construct each iteration. SDAGD is implemented in a two-layer Multilayer Perceptron to evaluate the effect of R on artificial neural networks. It is concluded that, with R constrained between 100 and 10,000, the greater the value of R, the higher the effective learning rate of the SDAGD algorithm.
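One common way to realize the two stages described above is a single damped diagonal-Newton update of the form Δw = -g / (|H_diag| + ||g||/R): when the gradient is large, the damping term ||g||/R dominates and the step reduces to a radial step of length roughly R along the greatest-descent direction; near the minimum, the damping vanishes and the update approaches a diagonal Newton step. The sketch below illustrates this under that assumption; the function name, the damping form, and the toy quadratic are illustrative, not taken from the paper.

```python
import numpy as np

def sdagd_update(w, grad, hess_diag, R):
    """One SDAGD-style weight update (illustrative sketch, not the
    authors' reference implementation).

    w         : current weight vector
    grad      : stochastic gradient of the loss at w
    hess_diag : diagonal approximation of the Hessian at w
    R         : radial boundary; the paper studies R in [100, 10000]
    """
    g_norm = np.linalg.norm(grad)
    # Far from the solution (large ||grad||), the damping g_norm / R
    # dominates, so the step is approximately -R * grad / ||grad||:
    # a step of length R bounded by the radial boundary.
    # Near the solution the damping vanishes and the update approaches
    # the diagonal Newton step -grad / hess_diag.
    step = -grad / (np.abs(hess_diag) + g_norm / R)
    return w + step

# Toy usage on the quadratic f(w) = 0.5 * w^T diag(h) w,
# whose gradient is h * w and whose Hessian diagonal is h.
h = np.array([2.0, 0.5, 1.0])
w = np.array([50.0, -80.0, 30.0])
for _ in range(100):
    w = sdagd_update(w, h * w, h, R=1000.0)
print(w)  # converges toward the minimum at the origin
```

With this form, increasing R weakens the damping and enlarges the effective step, which is consistent with the abstract's observation that a larger R corresponds to a higher learning rate for R between 100 and 10,000.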