{"title":"一种新的无约束优化的超记忆梯度方法","authors":"Jingyong Tanga, Li Dong","doi":"10.1109/CINC.2010.5643886","DOIUrl":null,"url":null,"abstract":"In this paper, we propose a new super-memory gradient method for unconstrained optimization problems. The global convergence and linear convergence rate are proved under some mild conditions. The method uses the current and previous iterative information to generate a new search direction and uses Wolfe line search to define the step-size at each iteration. It has a possibly simple structure and avoids the computation and storage of some matrices, which is suitable to solve large scale optimization problems. Numerical experiments show that the new algorithm is effective in practical computation in many situations.","PeriodicalId":227004,"journal":{"name":"2010 Second International Conference on Computational Intelligence and Natural Computing","volume":"39 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2010-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A new super-memory gradient method for unconstrained optimization\",\"authors\":\"Jingyong Tanga, Li Dong\",\"doi\":\"10.1109/CINC.2010.5643886\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In this paper, we propose a new super-memory gradient method for unconstrained optimization problems. The global convergence and linear convergence rate are proved under some mild conditions. The method uses the current and previous iterative information to generate a new search direction and uses Wolfe line search to define the step-size at each iteration. It has a possibly simple structure and avoids the computation and storage of some matrices, which is suitable to solve large scale optimization problems. Numerical experiments show that the new algorithm is effective in practical computation in many situations.\",\"PeriodicalId\":227004,\"journal\":{\"name\":\"2010 Second International Conference on Computational Intelligence and Natural Computing\",\"volume\":\"39 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2010-11-22\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2010 Second International Conference on Computational Intelligence and Natural Computing\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/CINC.2010.5643886\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2010 Second International Conference on Computational Intelligence and Natural Computing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CINC.2010.5643886","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
A new super-memory gradient method for unconstrained optimization
In this paper, we propose a new super-memory gradient method for unconstrained optimization problems. Global convergence and a linear convergence rate are proved under mild conditions. The method uses the current and previous iterative information to generate a new search direction and uses a Wolfe line search to determine the step size at each iteration. It has a simple structure and avoids the computation and storage of matrices, which makes it suitable for solving large-scale optimization problems. Numerical experiments show that the new algorithm is effective in practical computation in many situations.
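To illustrate the general idea described in the abstract, the following is a minimal Python sketch of a generic super-memory gradient iteration: the search direction combines the current negative gradient with the last few search directions, and the step size is chosen by a Wolfe line search. The specific weighting of the memory terms (beta / i), the memory depth m, and the steepest-descent safeguard are illustrative assumptions, not the update rule or parameters defined in the paper; scipy.optimize.line_search is used as a stand-in Wolfe line search.

```python
import numpy as np
from scipy.optimize import line_search  # strong Wolfe line search


def super_memory_gradient(f, grad, x0, m=3, beta=0.4, tol=1e-6, max_iter=1000):
    """Generic super-memory gradient sketch (not the paper's exact method).

    Direction: d_k = -g_k + sum_{i=1}^{min(k,m)} w_i * d_{k-i},
    i.e. the negative gradient corrected by the m most recent directions.
    The weights w_i = beta / i are an illustrative choice only.
    """
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    history = []  # last m search directions
    for k in range(max_iter):
        if np.linalg.norm(g) <= tol:
            break
        # Combine the negative gradient with previous directions.
        d = -g
        for i, d_prev in enumerate(reversed(history), start=1):
            d = d + (beta / i) * d_prev  # hypothetical memory weighting
        if g @ d >= 0:
            d = -g  # safeguard: fall back to steepest descent if not a descent direction
        # Wolfe line search for the step size.
        alpha = line_search(f, grad, x, d, gfk=g)[0]
        if alpha is None:
            alpha = 1e-4  # line search failed; take a small step
        x = x + alpha * d
        g = grad(x)
        history.append(d)
        if len(history) > m:
            history.pop(0)
    return x, g, k


# Example usage: minimize the Rosenbrock function.
rosen = lambda x: (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2
rosen_grad = lambda x: np.array([
    -2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0] ** 2),
    200 * (x[1] - x[0] ** 2),
])
x_star, g_star, iters = super_memory_gradient(rosen, rosen_grad, np.array([-1.2, 1.0]))
```

Note that, consistent with the abstract, this sketch stores only a handful of vectors (the recent directions) and never forms or stores a matrix, which is what makes super-memory gradient methods attractive for large-scale problems.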