{"title":"带 TV 惩罚的局部自适应稀疏加性量化回归模型","authors":"Yue Wang , Hongmei Lin , Zengyan Fan , Heng Lian","doi":"10.1016/j.jspi.2024.106144","DOIUrl":null,"url":null,"abstract":"<div><p><span>High-dimensional additive quantile regression<span> model via penalization provides a powerful tool for analyzing complex data in many contemporary applications. Despite the fast developments, how to combine the strengths of additive quantile regression with total variation penalty with theoretical guarantees still remains unexplored. In this paper, we propose a new methodology for sparse additive quantile regression model over bounded variation function classes via the empirical norm penalty and the total variation penalty for local adaptivity. Theoretically, we prove that the proposed method achieves the optimal convergence rate under mild assumptions. Moreover, an </span></span>alternating direction method of multipliers (ADMM) based algorithm is developed. Both simulation results and real data analysis confirm the effectiveness of our method.</p></div>","PeriodicalId":0,"journal":{"name":"","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-01-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Locally adaptive sparse additive quantile regression model with TV penalty\",\"authors\":\"Yue Wang , Hongmei Lin , Zengyan Fan , Heng Lian\",\"doi\":\"10.1016/j.jspi.2024.106144\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p><span>High-dimensional additive quantile regression<span> model via penalization provides a powerful tool for analyzing complex data in many contemporary applications. Despite the fast developments, how to combine the strengths of additive quantile regression with total variation penalty with theoretical guarantees still remains unexplored. In this paper, we propose a new methodology for sparse additive quantile regression model over bounded variation function classes via the empirical norm penalty and the total variation penalty for local adaptivity. Theoretically, we prove that the proposed method achieves the optimal convergence rate under mild assumptions. Moreover, an </span></span>alternating direction method of multipliers (ADMM) based algorithm is developed. Both simulation results and real data analysis confirm the effectiveness of our method.</p></div>\",\"PeriodicalId\":0,\"journal\":{\"name\":\"\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0,\"publicationDate\":\"2024-01-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"\",\"FirstCategoryId\":\"100\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0378375824000016\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"","FirstCategoryId":"100","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0378375824000016","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Locally adaptive sparse additive quantile regression model with TV penalty
High-dimensional additive quantile regression via penalization provides a powerful tool for analyzing complex data in many contemporary applications. Despite these rapid developments, how to combine the strengths of additive quantile regression with the total variation penalty, with theoretical guarantees, remains unexplored. In this paper, we propose a new methodology for sparse additive quantile regression over bounded variation function classes, using an empirical norm penalty, which induces sparsity, together with a total variation penalty, which provides local adaptivity. Theoretically, we prove that the proposed method achieves the optimal convergence rate under mild assumptions. Moreover, an alternating direction method of multipliers (ADMM)-based algorithm is developed. Both simulation results and real data analysis confirm the effectiveness of our method.
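As a concrete illustration of the kind of estimator the abstract describes (the exact formulation is not reproduced here, so the notation below — the quantile level $\tau$, check loss $\rho_\tau$, and tuning parameters $\lambda_1,\lambda_2$ — is assumed for this sketch), a sparse additive quantile regression estimator with an empirical norm penalty and a total variation penalty typically takes the form

$$
(\hat{f}_1,\dots,\hat{f}_p) \in \arg\min_{f_1,\dots,f_p}\;
\frac{1}{n}\sum_{i=1}^{n}\rho_\tau\!\Big(y_i-\sum_{j=1}^{p}f_j(x_{ij})\Big)
\;+\;\lambda_1\sum_{j=1}^{p}\|f_j\|_n
\;+\;\lambda_2\sum_{j=1}^{p}\mathrm{TV}(f_j),
$$

where $\rho_\tau(u)=u\,(\tau-\mathbf{1}\{u<0\})$ is the check loss, $\|f_j\|_n^2=n^{-1}\sum_{i=1}^{n}f_j(x_{ij})^2$ is the empirical norm whose penalty can shrink entire components to zero (sparsity), and $\mathrm{TV}(f_j)$ is the total variation of $f_j$ over the bounded variation class, allowing each component to adapt to locally varying smoothness. When the TV term is evaluated at the design points it reduces to a fused-lasso-type penalty, which is the structure that makes an ADMM-style splitting natural for computation.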