Latest Publications in Acta Mathematica Sinica-English Series

Preface of the Special Issue on Statistics
IF 0.8 | CAS Tier 3 (Mathematics) | Q2 MATHEMATICS | Pub Date: 2025-02-20 | DOI: 10.1007/s10114-025-4551-1
Zhiming Ma, Fuzhou Gong, Liuquan Sun
{"title":"Preface of the Special Issue on Statistics","authors":"Zhiming Ma, Fuzhou Gong, Liuquan Sun","doi":"10.1007/s10114-025-4551-1","DOIUrl":"10.1007/s10114-025-4551-1","url":null,"abstract":"","PeriodicalId":50893,"journal":{"name":"Acta Mathematica Sinica-English Series","volume":"41 2","pages":"497 - 497"},"PeriodicalIF":0.8,"publicationDate":"2025-02-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143455582","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Reproducible Learning of Gaussian Graphical Models via Graphical Lasso Multiple Data Splitting
IF 0.8 | CAS Tier 3 (Mathematics) | Q2 MATHEMATICS | Pub Date: 2025-02-15 | DOI: 10.1007/s10114-025-3324-1
Kang Hu, Danning Li, Binghui Liu

Gaussian graphical models (GGMs) are widely used as intuitive and efficient tools for data analysis in many application domains. To address the reproducibility of structure learning for a GGM, it is essential to control the false discovery rate (FDR) of the estimated edge set of the graph. Hence, in recent years, the problem of GGM estimation with FDR control has received increasing attention. In this paper, we propose a new GGM estimation method based on multiple data splitting. Instead of using node-by-node regressions to estimate each row of the precision matrix, we directly estimate the entire precision matrix with the graphical Lasso within each data split, which makes the computation roughly p times faster than the node-by-node approach. We show that the proposed method asymptotically controls the FDR and has significant advantages in computational efficiency. Finally, we demonstrate its usefulness through a real data analysis.
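
To make the splitting idea concrete, below is a minimal single-split sketch using scikit-learn's GraphicalLasso and a mirror-statistic threshold. It illustrates only the general split-and-select principle, not the authors' multiple-data-splitting procedure; the regularization level alpha, the particular mirror statistic, and the target FDR level q are illustrative assumptions.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

def single_split_edge_selection(X, alpha=0.1, q=0.1, seed=0):
    """Illustrative single-split FDR-type edge selection for a GGM.

    Fit the graphical Lasso on two halves of the data, form a mirror
    statistic for each edge from the two precision-matrix estimates,
    and keep edges whose statistic exceeds a data-driven threshold.
    """
    rng = np.random.default_rng(seed)
    n, p = X.shape
    idx = rng.permutation(n)
    X1, X2 = X[idx[: n // 2]], X[idx[n // 2:]]

    Th1 = GraphicalLasso(alpha=alpha).fit(X1).precision_
    Th2 = GraphicalLasso(alpha=alpha).fit(X2).precision_

    iu = np.triu_indices(p, k=1)
    # Mirror statistic: large and positive when both splits agree on an edge.
    M = np.sign(Th1[iu] * Th2[iu]) * (np.abs(Th1[iu]) + np.abs(Th2[iu]))

    # Smallest threshold t whose estimated FDR (negatives over positives) is <= q.
    for t in np.sort(np.abs(M[M != 0])):
        fdr_hat = (np.sum(M <= -t) + 1) / max(np.sum(M >= t), 1)
        if fdr_hat <= q:
            return [(int(i), int(j)) for i, j, m in zip(iu[0], iu[1], M) if m >= t]
    return []
```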

{"title":"Reproducible Learning of Gaussian Graphical Models via Graphical Lasso Multiple Data Splitting","authors":"Kang Hu,&nbsp;Danning Li,&nbsp;Binghui Liu","doi":"10.1007/s10114-025-3324-1","DOIUrl":"10.1007/s10114-025-3324-1","url":null,"abstract":"<div><p>Gaussian graphical models (GGMs) are widely used as intuitive and efficient tools for data analysis in several application domains. To address the reproducibility issue of structure learning of a GGM, it is essential to control the false discovery rate (FDR) of the estimated edge set of the graph in terms of the graphical model. Hence, in recent years, the problem of GGM estimation with FDR control is receiving more and more attention. In this paper, we propose a new GGM estimation method by implementing multiple data splitting. Instead of using the node-by-node regressions to estimate each row of the precision matrix, we suggest directly estimating the entire precision matrix using the graphical Lasso in the multiple data splitting, and our calculation speed is <i>p</i> times faster than the previous. We show that the proposed method can asymptotically control FDR, and the proposed method has significant advantages in computational efficiency. Finally, we demonstrate the usefulness of the proposed method through a real data analysis.</p></div>","PeriodicalId":50893,"journal":{"name":"Acta Mathematica Sinica-English Series","volume":"41 2","pages":"553 - 568"},"PeriodicalIF":0.8,"publicationDate":"2025-02-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143455417","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Variational Bayesian Tensor Quantile Regression
IF 0.8 | CAS Tier 3 (Mathematics) | Q2 MATHEMATICS | Pub Date: 2025-02-15 | DOI: 10.1007/s10114-025-3390-4
Yunzhi Jin, Yanqing Zhang

Quantile regression is widely used in statistical learning to study relationships between variables. The traditional quantile regression model is based on vector-valued covariates and can be estimated efficiently via standard methods. However, many modern applications involve tensor data with an intrinsic tensor structure, which traditional quantile regression cannot handle well. To this end, we consider a tensor quantile regression with tensor-valued covariates and develop a novel variational Bayesian estimation approach for estimation and prediction, based on the asymmetric Laplace model and the CANDECOMP/PARAFAC (CP) decomposition of the tensor coefficients. To incorporate the sparsity of the tensor coefficients, we place multiway shrinkage priors on the marginal factor vectors of the tensor coefficients. The key idea of the proposed method is to combine the prior structural information of the tensor efficiently and to use the matricization of the tensor decomposition to reduce the complexity of estimating the tensor coefficients. A coordinate ascent algorithm is employed to optimize the variational lower bound. Simulation studies and a real example demonstrate the numerical performance of the proposed method.
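
As an illustration of the two building blocks named above, the sketch below implements the quantile check loss (the asymmetric Laplace working likelihood corresponds to this loss up to constants) and a CP reconstruction of a coefficient tensor in plain NumPy; the rank, dimensions, and function names are illustrative and this is not the authors' variational algorithm.

```python
import numpy as np

def check_loss(residual, tau):
    """Quantile check loss rho_tau(u) = u * (tau - 1{u < 0})."""
    return residual * (tau - (residual < 0))

def cp_reconstruct(factors):
    """Rebuild a coefficient tensor from CP factor matrices.

    factors = [A1, ..., AD], each Ad of shape (I_d, R); the tensor is
    sum_r A1[:, r] o A2[:, r] o ... o AD[:, r] (outer products).
    """
    R = factors[0].shape[1]
    B = np.zeros(tuple(A.shape[0] for A in factors))
    for r in range(R):
        comp = factors[0][:, r]
        for A in factors[1:]:
            comp = np.multiply.outer(comp, A[:, r])
        B += comp
    return B

# Toy usage: a rank-2 coefficient tensor for 8 x 8 image-like covariates X_i.
rng = np.random.default_rng(0)
factors = [rng.normal(size=(8, 2)), rng.normal(size=(8, 2))]
B = cp_reconstruct(factors)
X = rng.normal(size=(100, 8, 8))
y = np.einsum('ijk,jk->i', X, B) + rng.normal(size=100)
loss = check_loss(y - np.einsum('ijk,jk->i', X, B), tau=0.5).mean()
```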

{"title":"Variational Bayesian Tensor Quantile Regression","authors":"Yunzhi Jin,&nbsp;Yanqing Zhang","doi":"10.1007/s10114-025-3390-4","DOIUrl":"10.1007/s10114-025-3390-4","url":null,"abstract":"<div><p>Quantile regression is widely used in variable relationship research for statistical learning. Traditional quantile regression model is based on vector-valued covariates and can be efficiently estimated via traditional estimation methods. However, many modern applications involve tensor data with the intrinsic tensor structure. Traditional quantile regression can not deal with tensor regression issues well. To this end, we consider a tensor quantile regression with tensor-valued covariates and develop a novel variational Bayesian estimation approach to make estimation and prediction based on the asymmetric Laplace model and the CANDECOMP/PARAFAC decomposition of tensor coefficients. To incorporate the sparsity of tensor coefficients, we consider the multiway shrinkage priors for marginal factor vectors of tensor coefficients. The key idea of the proposed method is to efficiently combine the prior structural information of tensor and utilize the matricization of tensor decomposition to simplify the complexity of tensor coefficient estimation. The coordinate ascent algorithm is employed to optimize variational lower bound. Simulation studies and a real example show the numerical performances of the proposed method.</p></div>","PeriodicalId":50893,"journal":{"name":"Acta Mathematica Sinica-English Series","volume":"41 2","pages":"733 - 756"},"PeriodicalIF":0.8,"publicationDate":"2025-02-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143455439","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Kaiser Criterion in Factor Models
IF 0.8 | CAS Tier 3 (Mathematics) | Q2 MATHEMATICS | Pub Date: 2025-02-15 | DOI: 10.1007/s10114-025-3383-3
Changhu Wang, Jianhua Guo, Yanyuan Ma, Shurong Zheng

Despite the wide use of factor models, the issue of determining the number of factors has not been resolved in the statistics literature. An ad hoc approach is to set the number of factors equal to the number of eigenvalues of the data correlation matrix that are larger than one, and subsequent statistical analysis proceeds assuming the resulting factor number is correct. In this work, we study the relation between the number of such eigenvalues and the number of factors, and provide necessary and sufficient conditions under which the two numbers are equal. We show that the equality relies only on the properties of the loading matrix of the factor model. Guided by the newly discovered condition, we further reveal how the model error affects the estimation of the number of factors.
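
The criterion under study is simple to state in code. The sketch below counts the eigenvalues of the sample correlation matrix that exceed one on simulated three-factor data; the simulation setup and variable names are illustrative.

```python
import numpy as np

def kaiser_number_of_factors(X):
    """Count eigenvalues of the sample correlation matrix that exceed one."""
    R = np.corrcoef(X, rowvar=False)          # p x p sample correlation matrix
    eigvals = np.linalg.eigvalsh(R)           # symmetric matrix -> eigvalsh
    return int(np.sum(eigvals > 1.0))

# Example: 3-factor data with p = 10 observed variables.
rng = np.random.default_rng(1)
L = rng.normal(size=(10, 3))                  # loading matrix
F = rng.normal(size=(500, 3))                 # latent factors
X = F @ L.T + 0.5 * rng.normal(size=(500, 10))
print(kaiser_number_of_factors(X))
```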

{"title":"Kaiser Criterion in Factor Models","authors":"Changhu Wang,&nbsp;Jianhua Guo,&nbsp;Yanyuan Ma,&nbsp;Shurong Zheng","doi":"10.1007/s10114-025-3383-3","DOIUrl":"10.1007/s10114-025-3383-3","url":null,"abstract":"<div><p>Despite of the wide use of the factor models, the issue of determining the number of factors has not been resolved in the statistics literature. An ad hoc approach is to set the number of factors to be the number of eigenvalues of the data correlation matrix that are larger than one, and subsequent statistical analysis proceeds assuming the resulting factor number is correct. In this work, we study the relation between the number of such eigenvalues and the number of factors, and provide the if and only if conditions under which the two numbers are equal. We show that the equality only relies on the properties of the loading matrix of the factor model. Guided by the newly discovered condition, we further reveal how the model error affects the estimation of the number of factors.</p></div>","PeriodicalId":50893,"journal":{"name":"Acta Mathematica Sinica-English Series","volume":"41 2","pages":"547 - 552"},"PeriodicalIF":0.8,"publicationDate":"2025-02-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143455481","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Inference for High-Dimensional Streamed Longitudinal Data
IF 0.8 | CAS Tier 3 (Mathematics) | Q2 MATHEMATICS | Pub Date: 2025-02-15 | DOI: 10.1007/s10114-025-3305-4
Senyuan Zheng, Ling Zhou

With the advent of modern devices such as smartphones and wearables, high-dimensional data are collected on many participants over a period of time or even in perpetuity. For this type of data, dependencies between and within data batches exist because data are collected from the same individuals over time. Under the streamed-data framework, individual historical data are not available because of the storage and computation burden. There is thus a pressing need for computationally efficient methods, with statistical guarantees, for analyzing high-dimensional streamed data and making reliable inferences in practice. In addition, the homogeneity assumption on the model parameters may not remain valid over time. To address these issues, we develop a new renewable debiased-lasso inference method for high-dimensional streamed data that allows dependencies between and within data batches and allows the model parameters to change gradually. We establish the large-sample properties of the proposed estimators, including consistency and asymptotic normality. Numerical results, including simulations and a real data analysis, show the superior performance of the proposed method.
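
The "renewable" idea of updating fixed-size summary statistics batch by batch, without revisiting raw historical data, can be illustrated with a least-squares toy; this is only a schematic of the streaming update pattern and not the paper's debiased-lasso procedure.

```python
import numpy as np

class RenewableLeastSquares:
    """Streaming least squares that stores only X'X and X'y.

    A toy illustration of renewable estimation: each incoming batch
    updates fixed-size summary statistics, so raw historical data never
    needs to be stored (the paper's debiased-lasso updates are more
    involved but follow the same pattern).
    """

    def __init__(self, p):
        self.XtX = np.zeros((p, p))
        self.Xty = np.zeros(p)
        self.n = 0

    def update(self, X_batch, y_batch):
        self.XtX += X_batch.T @ X_batch
        self.Xty += X_batch.T @ y_batch
        self.n += len(y_batch)

    def estimate(self, ridge=1e-8):
        p = self.Xty.shape[0]
        return np.linalg.solve(self.XtX + ridge * np.eye(p), self.Xty)

# Usage: batches arrive over time and are discarded after the update.
rng = np.random.default_rng(2)
beta = np.array([1.0, -2.0, 0.5])
model = RenewableLeastSquares(p=3)
for _ in range(10):
    Xb = rng.normal(size=(50, 3))
    yb = Xb @ beta + rng.normal(size=50)
    model.update(Xb, yb)
print(model.estimate())
```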

{"title":"Inference for High-Dimensional Streamed Longitudinal Data","authors":"Senyuan Zheng,&nbsp;Ling Zhou","doi":"10.1007/s10114-025-3305-4","DOIUrl":"10.1007/s10114-025-3305-4","url":null,"abstract":"<div><p>With the advent of modern devices, such as smartphones and wearable devices, high-dimensional data are collected on many participants for a period of time or even in perpetuity. For this type of data, dependencies between and within data batches exist because data are collected from the same individual over time. Under the framework of streamed data, individual historical data are not available due to the storage and computation burden. It is urgent to develop computationally efficient methods with statistical guarantees to analyze high-dimensional streamed data and make reliable inferences in practice. In addition, the homogeneity assumption on the model parameters may not be valid in practice over time. To address the above issues, in this paper, we develop a new renewable debiased-lasso inference method for high-dimensional streamed data allowing dependences between and within data batches to exist and model parameters to gradually change. We establish the large sample properties of the proposed estimators, including consistency and asymptotic normality. The numerical results, including simulations and real data analysis, show the superior performance of the proposed method.</p></div>","PeriodicalId":50893,"journal":{"name":"Acta Mathematica Sinica-English Series","volume":"41 2","pages":"757 - 779"},"PeriodicalIF":0.8,"publicationDate":"2025-02-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143455415","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Efficient Estimation of Single-index Models with Deep ReQU Neural Networks
IF 0.8 | CAS Tier 3 (Mathematics) | Q2 MATHEMATICS | Pub Date: 2025-02-15 | DOI: 10.1007/s10114-025-3335-y
Zhihuang Yang, Siming Zheng, Niansheng Tang

The single-index model offers greater modelling flexibility than generalized linear models while retaining much of their interpretability. Although many standard approaches, such as kernels or penalized/smoothing splines, have been proposed to estimate the smooth link function, they cannot effectively approximate complicated unknown link functions together with their derivatives, owing to their limited approximation ability at finite sample sizes. To alleviate this problem, this paper proposes a semiparametric least squares estimation approach for the single-index model using deep neural networks activated by rectified quadratic units (ReQU), called the deep semiparametric least squares (DSLS) estimation method. Under some regularity conditions, we establish non-asymptotic properties of the proposed DSLS estimator and show that the index coefficient estimator can achieve semiparametric efficiency. In particular, we obtain the consistency and convergence rate of the DSLS estimator when the response variable is conditionally sub-exponential. This is an attempt to incorporate deep learning techniques into semiparametrically efficient estimation in a single-index model. Several simulation studies and a real data example illustrate the proposed DSLS estimator.
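
A ReQU activation (max(x, 0) squared) and a single-index network are easy to sketch in PyTorch; the architecture, widths, optimizer, and training loop below are illustrative assumptions rather than the authors' DSLS implementation.

```python
import torch
import torch.nn as nn

class ReQU(nn.Module):
    """Rectified quadratic unit: max(x, 0) ** 2."""
    def forward(self, x):
        return torch.relu(x) ** 2

class SingleIndexNet(nn.Module):
    """y ~ g(theta' x): a unit-norm index followed by a ReQU MLP for g."""
    def __init__(self, p, width=32):
        super().__init__()
        self.theta = nn.Parameter(torch.randn(p))
        self.link = nn.Sequential(
            nn.Linear(1, width), ReQU(),
            nn.Linear(width, width), ReQU(),
            nn.Linear(width, 1),
        )

    def forward(self, x):
        theta = self.theta / self.theta.norm()   # identifiability: ||theta|| = 1
        index = x @ theta
        return self.link(index.unsqueeze(-1)).squeeze(-1)

# Schematic least-squares (semiparametric least squares) training loop.
p = 5
X = torch.randn(500, p)
y = torch.sin(X @ torch.ones(p) / p ** 0.5) + 0.1 * torch.randn(500)
model = SingleIndexNet(p)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    loss = ((model(X) - y) ** 2).mean()
    loss.backward()
    opt.step()
```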

{"title":"Efficient Estimation of Single-index Models with Deep ReQU Neural Networks","authors":"Zhihuang Yang,&nbsp;Siming Zheng,&nbsp;Niansheng Tang","doi":"10.1007/s10114-025-3335-y","DOIUrl":"10.1007/s10114-025-3335-y","url":null,"abstract":"<div><p>Single-index model offers the greater flexibility of modelling than generalized linear models and also retains the interpretability of the model to some extent. Although many standard approaches such as kernels or penalized/smooothing splines were proposed to estimate smooth link function, they cannot approximate complicated unknown link functions together with the corresponding derivatives effectively due to their poor approximation ability for a finite sample size. To alleviate this problem, this paper proposes a semiparametric least squares estimation approach for a single-index model using the rectifier quadratic unit (ReQU) activated deep neural networks, called deep semiparametric least squares (DSLS) estimation method. Under some regularity conditions, we show non-asymptotic properties of the proposed DSLS estimator, and evidence that the index coefficient estimator can achieve the semiparametric efficiency. In particular, we obtain the consistency and the convergence rate of the proposed DSLS estimator when response variable is conditionally sub-exponential. This is an attempt to incorporate deep learning technique into semiparametrically efficient estimation in a single index model. Several simulation studies and a real example data analysis are conducted to illustrate the proposed DSLS estimator.</p></div>","PeriodicalId":50893,"journal":{"name":"Acta Mathematica Sinica-English Series","volume":"41 2","pages":"640 - 676"},"PeriodicalIF":0.8,"publicationDate":"2025-02-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143455418","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A Feature Transformation and Selection Method to Acquire an Interpretable Model Incorporating Nonlinear Effects
IF 0.8 | CAS Tier 3 (Mathematics) | Q2 MATHEMATICS | Pub Date: 2025-02-15 | DOI: 10.1007/s10114-025-3329-9
Yu Zheng, Jin Zhu, Junxian Zhu, Xueqin Wang

Finding a highly interpretable nonlinear model has been an important yet challenging problem, and related research is relatively scarce in the current literature. To tackle this issue, we propose a new algorithm called Feat-ABESS, based on a framework that uses feature transformation and selection to re-interpret many machine learning algorithms. The core idea behind Feat-ABESS is to parameterize an interpretable feature transformation within this framework and construct an objective function based on these parameters. This approach enables us to identify a proper interpretable feature transformation from the optimization perspective. By leveraging a recently advanced optimization technique, Feat-ABESS obtains a concise and interpretable model. Moreover, Feat-ABESS can perform nonlinear variable selection. Extensive experiments on 205 benchmark datasets and case studies on two datasets demonstrate that Feat-ABESS achieves strong prediction accuracy while maintaining a high level of interpretability. A comparison with existing nonlinear variable selection methods shows that Feat-ABESS has a higher true positive rate and a lower false discovery rate.
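
The general pattern of the framework (build interpretable transforms of each raw feature, then run a best-subset-type selection over the transformed features) can be sketched as follows; the candidate transforms, the exhaustive OLS subset search, and all names are illustrative and do not reproduce the Feat-ABESS algorithm or its optimizer.

```python
import numpy as np
from itertools import combinations

def build_transforms(X):
    """Interpretable candidate transforms of each raw feature (illustrative set)."""
    feats, names = [], []
    for j in range(X.shape[1]):
        x = X[:, j]
        for label, t in [("x", x), ("x^2", x ** 2), ("log1p|x|", np.log1p(np.abs(x)))]:
            feats.append(t)
            names.append(f"{label}[{j}]")
    return np.column_stack(feats), names

def best_subset_ols(Z, y, k):
    """Exhaustive best-subset OLS over k transformed features (small k only)."""
    best_rss, best_idx = np.inf, None
    for idx in combinations(range(Z.shape[1]), k):
        Zk = Z[:, idx]
        beta, *_ = np.linalg.lstsq(Zk, y, rcond=None)
        rss = np.sum((y - Zk @ beta) ** 2)
        if rss < best_rss:
            best_rss, best_idx = rss, idx
    return best_idx

rng = np.random.default_rng(3)
X = rng.normal(size=(300, 4))
y = X[:, 0] ** 2 - 2 * X[:, 2] + 0.1 * rng.normal(size=300)
Z, names = build_transforms(X)
sel = best_subset_ols(Z, y, k=2)
print([names[i] for i in sel])   # should recover x^2[0] and x[2]
```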

{"title":"A Feature Transformation and Selection Method to Acquire an Interpretable Model Incorporating Nonlinear Effects","authors":"Yu Zheng,&nbsp;Jin Zhu,&nbsp;Junxian Zhu,&nbsp;Xueqin Wang","doi":"10.1007/s10114-025-3329-9","DOIUrl":"10.1007/s10114-025-3329-9","url":null,"abstract":"<div><p>Finding a highly interpretable nonlinear model has been an important yet challenging problem, and related research is relatively scarce in the current literature. To tackle this issue, we propose a new algorithm called Feat-ABESS based on a framework that utilizes feature transformation and selection for re-interpreting many machine learning algorithms. The core idea behind Feat-ABESS is to parameterize interpretable feature transformation within this framework and construct an objective function based on these parameters. This approach enables us to identify a proper interpretable feature transformation from the optimization perspective. By leveraging a recently advanced optimization technique, Feat-ABESS can obtain a concise and interpretable model. Moreover, Feat-ABESS can perform nonlinear variable selection. Our extensive experiments on 205 benchmark datasets and case studies on two datasets have demonstrated that Feat-ABESS can achieve powerful prediction accuracy while maintaining a high level of interpretability. The comparison with existing nonlinear variable selection methods exhibits Feat-ABESS has a higher true positive rate and a lower false discovery rate.</p></div>","PeriodicalId":50893,"journal":{"name":"Acta Mathematica Sinica-English Series","volume":"41 2","pages":"703 - 732"},"PeriodicalIF":0.8,"publicationDate":"2025-02-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143455419","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Distributed Mallows Model Averaging for Ridge Regressions
IF 0.8 | CAS Tier 3 (Mathematics) | Q2 MATHEMATICS | Pub Date: 2025-02-15 | DOI: 10.1007/s10114-025-3409-x
Haili Zhang, Alan T. K. Wan, Kang You, Guohua Zou

Ridge regression is an effective tool for handling multicollinearity in regression. It is also an essential shrinkage and regularization method and is widely used in big data and distributed data applications. The divide-and-conquer approach, which combines the estimators from the subsets with equal weights, is commonly applied to distributed data. To overcome multicollinearity and improve estimation accuracy with distributed data, we propose a Mallows-type model averaging method for ridge regressions that combines the estimators from all subsets. The method is proved to be asymptotically optimal while allowing the number of subsets and the dimension of the variables to diverge. We also derive the consistency of the resulting weight estimators toward the theoretically optimal weights, and we establish the asymptotic normality of the model averaging estimator. Our simulation study and real data analysis show that the proposed model averaging method often outperforms commonly used model selection and model averaging methods in distributed data settings.
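
A schematic of Mallows-type weighting of per-subset ridge estimators is given below: fit ridge on each subset, then choose weights on the probability simplex by minimizing a squared-error criterion with a degrees-of-freedom penalty. The criterion, the penalty, and the solver are illustrative assumptions, not the paper's exact criterion or theory.

```python
import numpy as np
from scipy.optimize import minimize

def ridge_fit(X, y, lam):
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

def ridge_dof(X, lam):
    """Effective degrees of freedom of ridge on X (trace of its hat matrix)."""
    s = np.linalg.svd(X, compute_uv=False)
    return float(np.sum(s ** 2 / (s ** 2 + lam)))

def mallows_averaged_ridge(subsets, lam=1.0, sigma2=1.0):
    """Combine per-subset ridge estimators with Mallows-type simplex weights.

    subsets: list of (X_m, y_m) pairs, e.g., data held on different machines.
    The criterion (pooled squared error plus 2 * sigma2 * weighted degrees of
    freedom) is a schematic Mallows-style stand-in, not the paper's criterion.
    """
    betas = [ridge_fit(X, y, lam) for X, y in subsets]
    dofs = np.array([ridge_dof(X, lam) for X, _ in subsets])
    X_all = np.vstack([X for X, _ in subsets])
    y_all = np.concatenate([y for _, y in subsets])
    preds = np.column_stack([X_all @ b for b in betas])      # n x M matrix

    def criterion(w):
        return np.sum((y_all - preds @ w) ** 2) + 2.0 * sigma2 * float(w @ dofs)

    M = len(betas)
    res = minimize(criterion, np.full(M, 1.0 / M),
                   bounds=[(0.0, 1.0)] * M,
                   constraints=[{"type": "eq", "fun": lambda w: np.sum(w) - 1.0}],
                   method="SLSQP")
    w = res.x
    return sum(wi * bi for wi, bi in zip(w, betas)), w
```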

{"title":"Distributed Mallows Model Averaging for Ridge Regressions","authors":"Haili Zhang,&nbsp;Alan T. K. Wan,&nbsp;Kang You,&nbsp;Guohua Zou","doi":"10.1007/s10114-025-3409-x","DOIUrl":"10.1007/s10114-025-3409-x","url":null,"abstract":"<div><p>Ridge regression is an effective tool to handle multicollinearity in regressions. It is also an essential type of shrinkage and regularization methods and is widely used in big data and distributed data applications. The divide and conquer trick, which combines the estimator in each subset with equal weight, is commonly applied in distributed data. To overcome multicollinearity and improve estimation accuracy in the presence of distributed data, we propose a Mallows-type model averaging method for ridge regressions, which combines estimators from all subsets. Our method is proved to be asymptotically optimal allowing the number of subsets and the dimension of variables to be divergent. The consistency of the resultant weight estimators tending to the theoretically optimal weights is also derived. Furthermore, the asymptotic normality of the model averaging estimator is demonstrated. Our simulation study and real data analysis show that the proposed model averaging method often performs better than commonly used model selection and model averaging methods in distributed data cases.</p></div>","PeriodicalId":50893,"journal":{"name":"Acta Mathematica Sinica-English Series","volume":"41 2","pages":"780 - 826"},"PeriodicalIF":0.8,"publicationDate":"2025-02-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143455438","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Tail Dependence Matrices and Tests Based on Spearman’s ρ and Kendall’s τ
IF 0.8 | CAS Tier 3 (Mathematics) | Q2 MATHEMATICS | Pub Date: 2025-02-15 | DOI: 10.1007/s10114-025-3225-3
Lingyue Zhang, Dawei Lu, Hengjian Cui

Measuring and testing tail dependence is important in finance, insurance, and risk management. This paper proposes two tail dependence matrices based on classical rank correlation coefficients, which possess the desired population properties and are interpretable. Their nonparametric estimators, with strong consistency and asymptotic distributions, are derived using the limit theory of U-processes. Simulation and application studies show that, compared to the tail dependence matrix based on Spearman's ρ, whose estimates exhibit larger deviations, the Kendall-based tail dependence measure has stable variances under different tail conditions; it is thus an effective approach for testing and quantifying tail dependence between random variables.
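
A basic ingredient of such measures, a rank correlation restricted to the joint upper tail, can be sketched as follows; the threshold choice and this particular tail-restricted statistic are illustrative and differ from the paper's tail dependence matrix estimators.

```python
import numpy as np
from scipy.stats import kendalltau, spearmanr

def upper_tail_rank_corr(x, y, q=0.90):
    """Kendall's tau and Spearman's rho restricted to the joint upper tail.

    Keeps observations where both coordinates exceed their empirical
    q-quantile; a simple tail-restricted statistic for illustration.
    """
    tx, ty = np.quantile(x, q), np.quantile(y, q)
    mask = (x > tx) & (y > ty)
    if mask.sum() < 10:
        raise ValueError("too few joint upper-tail observations")
    tau, _ = kendalltau(x[mask], y[mask])
    rho, _ = spearmanr(x[mask], y[mask])
    return tau, rho

# Usage on tail-dependent data (a common-shock construction).
rng = np.random.default_rng(4)
z = rng.standard_t(df=3, size=20000)
x = z + 0.5 * rng.normal(size=20000)
y = z + 0.5 * rng.normal(size=20000)
print(upper_tail_rank_corr(x, y))
```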

{"title":"Tail Dependence Matrices and Tests Based on Spearman’s ρ and Kendall’s τ","authors":"Lingyue Zhang,&nbsp;Dawei Lu,&nbsp;Hengjian Cui","doi":"10.1007/s10114-025-3225-3","DOIUrl":"10.1007/s10114-025-3225-3","url":null,"abstract":"<div><p>Measuring and testing tail dependence is important in finance, insurance, and risk management. This paper proposes two tail dependence matrices based on classic rank correlation coefficients, which possess the desired population properties and interpretability. Their nonparametric estimators with strong consistency and asymptotic distributions are derived using the limit theory of <i>U</i>-processes. The simulation and application studies show that, compared to the tail dependence matrix based on Spearman’s <i>ρ</i> with large deviation, the Kendall-based tail dependence measure has stable variances under different tail conditions; thus, it is an effective approach to testing and quantifying tail dependence between random variables.</p></div>","PeriodicalId":50893,"journal":{"name":"Acta Mathematica Sinica-English Series","volume":"41 2","pages":"522 - 546"},"PeriodicalIF":0.8,"publicationDate":"2025-02-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143455482","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Robust Multi-Task Regression with Shifting Low-Rank Patterns
IF 0.8 | CAS Tier 3 (Mathematics) | Q2 MATHEMATICS | Pub Date: 2025-02-15 | DOI: 10.1007/s10114-025-3362-8
Junfeng Cui, Guanghui Wang, Fengyi Song, Xiaoyan Ma, Changliang Zou

We consider the problem of multi-task regression with time-varying low-rank patterns, where the collected data may be contaminated by heavy-tailed distributions and/or outliers. Our approach is based on a piecewise robust multi-task learning formulation, in which a robust loss function (not necessarily convex, but with a bounded derivative) is used, and each piecewise low-rank pattern is induced by a nuclear norm regularization term. We propose using the composite gradient descent algorithm to obtain stationary points within a data segment and employing a dynamic programming algorithm to determine the optimal segmentation. The theoretical properties of the detected number and time points of pattern shifts are studied under mild conditions. Numerical results confirm the effectiveness of our method.
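
One composite-gradient iteration of this type, a gradient step on a bounded-derivative robust loss followed by the proximal operator of the nuclear norm (singular value soft-thresholding), can be sketched as follows for a single segment; the Huber loss, step size, and penalty level are illustrative assumptions, and the segmentation step is omitted.

```python
import numpy as np

def huber_grad(r, delta=1.0):
    """Derivative of the Huber loss: bounded, equals r for |r| <= delta."""
    return np.clip(r, -delta, delta)

def svt(B, tau):
    """Singular value soft-thresholding: prox of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def composite_gradient_descent(X, Y, lam=0.5, step=None, iters=300, delta=1.0):
    """Minimize mean Huber(Y - X B) + lam * ||B||_* by proximal gradient.

    X: n x p covariates, Y: n x q responses (one column per task),
    B: p x q low-rank coefficient matrix. A generic sketch of the
    composite gradient scheme within one data segment.
    """
    n, p = X.shape
    q = Y.shape[1]
    if step is None:
        step = 1.0 / (np.linalg.norm(X, 2) ** 2 / n)   # 1 / Lipschitz constant
    B = np.zeros((p, q))
    for _ in range(iters):
        R = Y - X @ B
        grad = -X.T @ huber_grad(R, delta) / n
        B = svt(B - step * grad, step * lam)
    return B
```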

{"title":"Robust Multi-Task Regression with Shifting Low-Rank Patterns","authors":"Junfeng Cui,&nbsp;Guanghui Wang,&nbsp;Fengyi Song,&nbsp;Xiaoyan Ma,&nbsp;Changliang Zou","doi":"10.1007/s10114-025-3362-8","DOIUrl":"10.1007/s10114-025-3362-8","url":null,"abstract":"<div><p>We consider the problem of multi-task regression with time-varying low-rank patterns, where the collected data may be contaminated by heavy-tailed distributions and/or outliers. Our approach is based on a piecewise robust multi-task learning formulation, in which a robust loss function—not necessarily to be convex, but with a bounded derivative—is used, and each piecewise low-rank pattern is induced by a nuclear norm regularization term. We propose using the composite gradient descent algorithm to obtain stationary points within a data segment and employing the dynamic programming algorithm to determine the optimal segmentation. The theoretical properties of the detected number and time points of pattern shifts are studied under mild conditions. Numerical results confirm the effectiveness of our method.</p></div>","PeriodicalId":50893,"journal":{"name":"Acta Mathematica Sinica-English Series","volume":"41 2","pages":"677 - 702"},"PeriodicalIF":0.8,"publicationDate":"2025-02-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143455416","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0