{"title":"基于种群的随机梯度估计的无导数优化","authors":"Azhar Khayrattee, G. Anagnostopoulos","doi":"10.1145/2576768.2598365","DOIUrl":null,"url":null,"abstract":"In this paper we introduce a derivative-free optimization method that is derived from a population based stochastic gradient estimator. We first demonstrate some properties of this estimator and show how it is expected to always yield a descent direction. We analytically show that the difference between the expected function value and the optimum decreases exponentially for strongly convex functions and the expected distance between the current point and the optimum has an upper bound. Then we experimentally tune the parameters of our algorithm to get the best performance. Finally, we use the Black-Box-Optimization-Benchmarking test function suite to evaluate the performance of the algorithm. The experiments indicate that the method offer notable performance advantages especially, when applied to objective functions that are ill-conditioned and potentially multi-modal. This result, coupled with the low computational cost when compared to Quasi-Newton methods, makes it quite attractive.","PeriodicalId":123241,"journal":{"name":"Proceedings of the 2014 Annual Conference on Genetic and Evolutionary Computation","volume":"79 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2014-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Derivative free optimization using a population-based stochastic gradient estimator\",\"authors\":\"Azhar Khayrattee, G. Anagnostopoulos\",\"doi\":\"10.1145/2576768.2598365\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In this paper we introduce a derivative-free optimization method that is derived from a population based stochastic gradient estimator. We first demonstrate some properties of this estimator and show how it is expected to always yield a descent direction. We analytically show that the difference between the expected function value and the optimum decreases exponentially for strongly convex functions and the expected distance between the current point and the optimum has an upper bound. Then we experimentally tune the parameters of our algorithm to get the best performance. Finally, we use the Black-Box-Optimization-Benchmarking test function suite to evaluate the performance of the algorithm. The experiments indicate that the method offer notable performance advantages especially, when applied to objective functions that are ill-conditioned and potentially multi-modal. 
This result, coupled with the low computational cost when compared to Quasi-Newton methods, makes it quite attractive.\",\"PeriodicalId\":123241,\"journal\":{\"name\":\"Proceedings of the 2014 Annual Conference on Genetic and Evolutionary Computation\",\"volume\":\"79 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2014-07-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 2014 Annual Conference on Genetic and Evolutionary Computation\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/2576768.2598365\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2014 Annual Conference on Genetic and Evolutionary Computation","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/2576768.2598365","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Derivative free optimization using a population-based stochastic gradient estimator
Abstract: In this paper we introduce a derivative-free optimization method derived from a population-based stochastic gradient estimator. We first establish some properties of this estimator and show that it is expected to always yield a descent direction. We prove analytically that, for strongly convex functions, the gap between the expected function value and the optimum decreases exponentially, and that the expected distance between the current point and the optimum is bounded above. We then tune the algorithm's parameters experimentally to obtain the best performance. Finally, we evaluate the algorithm on the Black-Box Optimization Benchmarking (BBOB) test function suite. The experiments indicate that the method offers notable performance advantages, especially when applied to objective functions that are ill-conditioned and potentially multimodal. This result, coupled with the method's low computational cost compared to quasi-Newton methods, makes it quite attractive.
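This record is abstract-only, so the paper's exact estimator is not reproduced here. As a rough illustration of the general idea, the sketch below implements one standard population-based gradient estimator (the Gaussian-smoothing, forward-difference form) and uses it to drive plain gradient descent; the function names, population size, step size, and smoothing radius are all illustrative assumptions, not the authors' method. For a quadratic objective this estimator is unbiased, so its negative is a descent direction in expectation, consistent with the abstract's claim; the abstract's convergence statement for strongly convex functions corresponds to a bound of the form E[f(x_k)] - f* <= rho^k (f(x_0) - f*) for some rho in (0, 1).

```python
import numpy as np

def population_gradient_estimate(f, x, pop_size=20, sigma=0.1, rng=None):
    """Estimate the gradient of f at x from a population of random probes.

    NOTE: this is a generic Gaussian-smoothing estimator, a stand-in for
    the paper's estimator, whose exact form is not given in this record.
    Its expectation is the gradient of a smoothed version of f, so the
    negative estimate is a descent direction in expectation.
    """
    rng = np.random.default_rng() if rng is None else rng
    fx = f(x)
    grad = np.zeros_like(x)
    for _ in range(pop_size):
        u = rng.standard_normal(x.shape)      # random probe direction
        grad += (f(x + sigma * u) - fx) * u   # forward-difference weight
    return grad / (pop_size * sigma)

def minimize(f, x0, step=0.004, iters=1000, **kw):
    """Plain gradient descent driven by the population-based estimate."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = x - step * population_gradient_estimate(f, x, **kw)
    return x

# Usage on an ill-conditioned quadratic (condition number 100):
f = lambda x: x[0] ** 2 + 100.0 * x[1] ** 2
print(minimize(f, [3.0, 2.0]))  # approaches the optimum at the origin
```

Each iteration costs pop_size + 1 function evaluations and requires no Hessian storage or factorization, which is the low-cost contrast with quasi-Newton methods that the abstract highlights.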