{"title":"GOAL: Generalized Jointly Sparse Linear Discriminant Regression for Feature Extraction","authors":"Haoquan Lu;Zhihui Lai;Junhong Zhang;Zhuozhen Yu;Jiajun Wen","doi":"10.1109/TAI.2024.3412862","DOIUrl":null,"url":null,"abstract":"Ridge regression (RR)-based methods aim to obtain a low-dimensional subspace for feature extraction. However, the subspace's dimensionality does not exceed the number of data categories, hence compromising its capability of feature representation. Moreover, these methods with \n<inline-formula><tex-math>$L_{2}$</tex-math></inline-formula>\n-norm metric and regularization cannot extract highly robust features from data with corruption. To address these problems, in this article, we propose generalized jointly sparse linear discriminant regression (GOAL), a novel regression method based on joint \n<inline-formula><tex-math>$L_{2,1}$</tex-math></inline-formula>\n-norm and capped-\n<inline-formula><tex-math>$L_{2}$</tex-math></inline-formula>\n-norm, which can integrate sparsity, locality, and discriminability into one model to learn a full-rank robust feature extractor. The sparsely selected discriminative features are robust enough to characterize the decision boundary between classes. Locality is related to manifold structure and Laplacian smoothing, which can enhance the robustness of the model. By using the multinorm metric and regularization regression framework, the proposed method obtains the projection with joint sparsity and guarantees that the rank of the projection matrix will not be limited by the number of classes. An iterative algorithm is proposed to compute the optimal solution. Complexity analysis and proofs of convergence are also given in the article. 
Experiments on well-known datasets demonstrate our model's superiority and generalization ability.","PeriodicalId":73305,"journal":{"name":"IEEE transactions on artificial intelligence","volume":"5 10","pages":"4959-4971"},"PeriodicalIF":0.0000,"publicationDate":"2024-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on artificial intelligence","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10553382/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Ridge regression (RR)-based methods aim to obtain a low-dimensional subspace for feature extraction. However, the subspace's dimensionality cannot exceed the number of data categories, which compromises its capability for feature representation. Moreover, methods built on the $L_{2}$-norm metric and regularization cannot extract highly robust features from corrupted data. To address these problems, this article proposes generalized jointly sparse linear discriminant regression (GOAL), a novel regression method based on the joint $L_{2,1}$-norm and capped-$L_{2}$-norm, which integrates sparsity, locality, and discriminability into one model to learn a full-rank robust feature extractor. The sparsely selected discriminative features are robust enough to characterize the decision boundary between classes. Locality is related to manifold structure and Laplacian smoothing, which enhances the robustness of the model. By using a multinorm metric and regularization regression framework, the proposed method obtains a projection with joint sparsity and guarantees that the rank of the projection matrix is not limited by the number of classes. An iterative algorithm is proposed to compute the optimal solution. Complexity analysis and proofs of convergence are also given. Experiments on well-known datasets demonstrate the model's superiority and generalization ability.
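The two norms named in the abstract can be illustrated with a minimal NumPy sketch. This is not the paper's algorithm — the function names and the exact form of the cap are assumptions for illustration only: the $L_{2,1}$-norm sums the Euclidean norms of a matrix's rows (so penalizing it drives whole rows of the projection to zero, i.e., joint feature selection), and a capped-$L_{2}$ metric truncates each sample's residual norm at a threshold so gross outliers cannot dominate the loss.

```python
import numpy as np

def l21_norm(W):
    """L2,1-norm: sum of the Euclidean (L2) norms of the rows of W.
    As a regularizer it promotes row-sparsity, i.e., jointly sparse
    selection of features across all projection directions."""
    return float(np.sum(np.linalg.norm(W, axis=1)))

def capped_l2_loss(residuals, eps):
    """Capped-L2 metric (illustrative form): per-sample L2 norm,
    truncated at eps so each corrupted sample contributes at most
    eps to the total loss."""
    norms = np.linalg.norm(residuals, axis=1)
    return float(np.sum(np.minimum(norms, eps)))

# Row (3,4) has norm 5; the all-zero row adds nothing.
W = np.array([[3.0, 4.0], [0.0, 0.0]])
print(l21_norm(W))                      # 5.0

# Sample norms are 5 and 10; the second is capped at eps = 6.
R = np.array([[3.0, 4.0], [6.0, 8.0]])
print(capped_l2_loss(R, eps=6.0))       # 5 + 6 = 11.0
```

Under such a cap, a sample whose residual exceeds `eps` exerts a bounded influence on the objective, which is the intuition behind the robustness claim in the abstract.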