GOAL: Generalized Jointly Sparse Linear Discriminant Regression for Feature Extraction

Haoquan Lu, Zhihui Lai, Junhong Zhang, Zhuozhen Yu, Jiajun Wen
DOI: 10.1109/TAI.2024.3412862
Journal: IEEE Transactions on Artificial Intelligence
Published: 2024-06-11 (Journal Article)
URL: https://ieeexplore.ieee.org/document/10553382/
Abstract

Ridge regression (RR)-based methods aim to learn a low-dimensional subspace for feature extraction. However, the dimensionality of that subspace cannot exceed the number of data categories, which limits its capacity for feature representation. Moreover, because these methods use the $L_{2}$-norm as both metric and regularizer, they cannot extract highly robust features from corrupted data. To address these problems, this article proposes generalized jointly sparse linear discriminant regression (GOAL), a novel regression method based on the joint $L_{2,1}$-norm and capped-$L_{2}$-norm, which integrates sparsity, locality, and discriminability into one model to learn a full-rank, robust feature extractor. The sparsely selected discriminative features are robust enough to characterize the decision boundary between classes. Locality is related to manifold structure and Laplacian smoothing, which enhance the model's robustness. By using a multinorm metric-and-regularization regression framework, the proposed method obtains a projection with joint sparsity and guarantees that the rank of the projection matrix is not limited by the number of classes. An iterative algorithm is proposed to compute the optimal solution; complexity analysis and convergence proofs are also given. Experiments on well-known datasets demonstrate the model's superiority and generalization ability.
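The abstract does not give GOAL's full objective function, but the two norms it combines are standard quantities. Below is a minimal sketch of what each one measures — the $L_{2,1}$-norm that induces row-wise (joint) sparsity in the projection matrix, and a capped-$L_{2}$ loss that bounds the influence of corrupted samples. Function names and the exact capping form are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def l21_norm(W):
    """L2,1-norm: the sum of the Euclidean norms of the rows of W.
    Penalizing it drives entire rows of the projection matrix to zero,
    so features are selected or discarded jointly across all projections."""
    return float(np.sum(np.linalg.norm(W, axis=1)))

def capped_l2_loss(residuals, eps):
    """Capped-L2 loss (illustrative form): each sample contributes its
    per-column L2 residual norm, capped at eps, so gross outliers cannot
    dominate the objective. residuals has one column per sample."""
    return float(np.sum(np.minimum(np.linalg.norm(residuals, axis=0), eps)))
```

For example, a projection matrix with rows [3, 4] and [0, 0] has $L_{2,1}$-norm 5, and its second row contributes nothing — exactly the row-sparsity pattern the joint norm rewards.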