{"title":"l2范数交换代价为平方的在线凸优化的最优动态遗憾","authors":"Qingsong Liu, Yaoyu Zhang","doi":"10.23919/ACC53348.2022.9867328","DOIUrl":null,"url":null,"abstract":"In this paper, we investigate online convex optimization (OCO) with squared l2 norm switching cost, which has great applicability but very little work has been done on it. Specifically, we provide a new theoretical analysis in terms of dynamic regret and lower bounds for the case when loss functions are strongly-convex and smooth or only smooth. We show that by applying the advanced Online Multiple Gradient Descent (OMGD) and Online Optimistic Mirror Descent (OOMD) algorithms that are originally proposed for classic OCO, we can achieve state-of-the-art performance bounds for OCO with squared l2 norm switching cost. Furthermore, we show that these bounds match the lower bound.","PeriodicalId":366299,"journal":{"name":"2022 American Control Conference (ACC)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"Optimal Dynamic Regret for Online Convex Optimization with Squared l2 Norm Switching Cost\",\"authors\":\"Qingsong Liu, Yaoyu Zhang\",\"doi\":\"10.23919/ACC53348.2022.9867328\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In this paper, we investigate online convex optimization (OCO) with squared l2 norm switching cost, which has great applicability but very little work has been done on it. Specifically, we provide a new theoretical analysis in terms of dynamic regret and lower bounds for the case when loss functions are strongly-convex and smooth or only smooth. We show that by applying the advanced Online Multiple Gradient Descent (OMGD) and Online Optimistic Mirror Descent (OOMD) algorithms that are originally proposed for classic OCO, we can achieve state-of-the-art performance bounds for OCO with squared l2 norm switching cost. Furthermore, we show that these bounds match the lower bound.\",\"PeriodicalId\":366299,\"journal\":{\"name\":\"2022 American Control Conference (ACC)\",\"volume\":\"15 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-06-08\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 American Control Conference (ACC)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.23919/ACC53348.2022.9867328\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 American Control Conference (ACC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.23919/ACC53348.2022.9867328","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Optimal Dynamic Regret for Online Convex Optimization with Squared l2 Norm Switching Cost
In this paper, we investigate online convex optimization (OCO) with a squared l2 norm switching cost, a setting with broad applicability on which little prior work exists. Specifically, we provide a new theoretical analysis of dynamic regret, together with lower bounds, for the cases where the loss functions are strongly convex and smooth, or only smooth. We show that by applying the Online Multiple Gradient Descent (OMGD) and Online Optimistic Mirror Descent (OOMD) algorithms, originally proposed for classic OCO, we can achieve state-of-the-art performance bounds for OCO with a squared l2 norm switching cost. Furthermore, we show that these upper bounds match the lower bounds.
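To make the OMGD idea concrete, below is a minimal Python sketch of online multiple gradient descent applied to this setting, where each round's cost is the loss plus a squared l2 norm switching penalty. This is not the paper's implementation: the function names, the step size `eta`, the inner-step count `K`, and the switching-cost weight `lam` are illustrative assumptions, not the tuned values from the paper's analysis.

```python
import numpy as np

def omgd(rounds, x0, eta=0.1, K=2, lam=1.0):
    """Sketch of Online Multiple Gradient Descent (OMGD) for OCO with a
    squared l2 norm switching cost.

    rounds: sequence of (loss_fn, grad_fn) pairs, revealed one per round
    x0:     initial decision point
    eta:    step size (assumed; in theory tuned to smoothness/strong convexity)
    K:      number of inner gradient-descent steps per round
    lam:    weight on the squared l2 switching cost

    Returns the cumulative cost sum_t f_t(x_t) + lam * ||x_t - x_{t-1}||^2.
    """
    x = np.asarray(x0, dtype=float)
    x_prev = x.copy()
    total = 0.0
    for f, grad in rounds:
        # Pay this round's loss plus the squared l2 switching cost.
        total += f(x) + lam * np.sum((x - x_prev) ** 2)
        x_prev = x.copy()
        # Take K gradient-descent steps on the just-revealed loss f_t
        # to choose the next decision x_{t+1}.
        for _ in range(K):
            x = x - eta * grad(x)
    return total

# Toy usage: quadratic losses f_t(x) = ||x - c_t||^2 whose minimizers c_t
# drift over time, so the dynamic comparator sequence moves.
rng = np.random.default_rng(0)
centers = np.cumsum(0.1 * rng.standard_normal((50, 2)), axis=0)
rounds = [(lambda x, c=c: float(np.sum((x - c) ** 2)),
           lambda x, c=c: 2.0 * (x - c)) for c in centers]
print(omgd(rounds, x0=np.zeros(2)))
```

In this toy run the cumulative cost tracks the drifting minimizers, illustrating why dynamic (rather than static) regret is the natural yardstick; the multiple inner steps per round are what distinguish OMGD from plain online gradient descent under strong convexity.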