{"title":"基于势函数加速的近端梯度映射及其范数最小化","authors":"Chen Beier, Zhang Hui","doi":"10.61208/pjo-2023-035","DOIUrl":null,"url":null,"abstract":"The proximal gradient descent method, well-known for composite optimization, can be completely described by the concept of proximal gradient mapping. In this paper, we highlight our previous two discoveries of proximal gradient mapping--norm monotonicity and refined descent, with which we are able to extend the recently proposed potential function-based framework from gradient descent to proximal gradient descent.","PeriodicalId":49716,"journal":{"name":"Pacific Journal of Optimization","volume":"46 1","pages":"0"},"PeriodicalIF":0.4000,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"On proximal gradient mapping and its minimization in norm via potential function-based acceleration\",\"authors\":\"Chen Beier, Zhang Hui\",\"doi\":\"10.61208/pjo-2023-035\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The proximal gradient descent method, well-known for composite optimization, can be completely described by the concept of proximal gradient mapping. In this paper, we highlight our previous two discoveries of proximal gradient mapping--norm monotonicity and refined descent, with which we are able to extend the recently proposed potential function-based framework from gradient descent to proximal gradient descent.\",\"PeriodicalId\":49716,\"journal\":{\"name\":\"Pacific Journal of Optimization\",\"volume\":\"46 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.4000,\"publicationDate\":\"2023-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Pacific Journal of Optimization\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.61208/pjo-2023-035\",\"RegionNum\":4,\"RegionCategory\":\"数学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q4\",\"JCRName\":\"MATHEMATICS, APPLIED\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Pacific Journal of Optimization","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.61208/pjo-2023-035","RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"MATHEMATICS, APPLIED","Score":null,"Total":0}
On proximal gradient mapping and its minimization in norm via potential function-based acceleration
The proximal gradient descent method, well known in composite optimization, can be completely described by the proximal gradient mapping. In this paper, we highlight two of our earlier findings about the proximal gradient mapping, namely norm monotonicity and a refined descent property, which allow us to extend the recently proposed potential function-based acceleration framework from gradient descent to proximal gradient descent.
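For context, a standard textbook formulation of the composite problem and the proximal gradient mapping (sketched here for orientation, not reproduced from the paper itself) is
\[
\min_{x \in \mathbb{R}^n} \; F(x) = f(x) + g(x),
\]
where, in the usual setting, $f$ is convex with $L$-Lipschitz continuous gradient and $g$ is proper, closed, and convex. For a step size $t > 0$, the proximal operator and the proximal gradient mapping are
\[
\operatorname{prox}_{t g}(y) = \arg\min_{x} \Big\{ g(x) + \tfrac{1}{2t}\|x - y\|^2 \Big\},
\qquad
G_t(x) = \tfrac{1}{t}\Big(x - \operatorname{prox}_{t g}\big(x - t \nabla f(x)\big)\Big),
\]
so that one proximal gradient step reads $x_{k+1} = x_k - t\, G_t(x_k)$. When $g \equiv 0$, $G_t$ reduces to $\nabla f$, and $G_t(x) = 0$ exactly at the minimizers of $F$; minimizing $\|G_t(\cdot)\|$ is thus the natural analogue of driving the gradient norm to zero in the smooth case, which is the "minimization in norm" referred to in the title.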