Robust generalized PCA for enhancing discriminability and recoverability
Zhenlei Dai, Liangchen Hu, Huaijiang Sun
Neural Networks, Volume 181, Article 106814. Published 2024-10-18. DOI: 10.1016/j.neunet.2024.106814
The dependency of low-dimensional embeddings on the principal component space severely limits the effectiveness of existing robust principal component analysis (PCA) algorithms. Simply projecting the original sample coordinates onto orthogonal principal component directions may not effectively handle the variety of noise-corrupted scenarios, impairing both discriminability and recoverability. Our method addresses this issue through a generalized PCA (GPCA), which optimizes a regression bias rather than the sample mean, yielding more adaptable properties. We further propose a robust GPCA model whose loss and regularization are based on the ℓ2,μ and ℓ2,ν norms, respectively. This approach not only mitigates sensitivity to outliers but also enhances the flexibility of feature extraction and selection. Additionally, we introduce a truncated and reweighted loss strategy, in which truncation eliminates severely deviated outliers and reweighting prioritizes the remaining samples. These innovations collectively improve the performance of the GPCA model. To solve the proposed model, we devise a non-greedy iterative algorithm with a theoretical convergence guarantee. Experimental results demonstrate that the proposed GPCA model outperforms previous robust PCA models in both recoverability and discriminability.
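The abstract describes an ℓ2,μ-norm loss combined with sample reweighting, but the paper's exact model and solver are not reproduced here. As a rough illustration only, the general family of ℓ2,p-norm robust PCA objectives can be minimized with an IRLS-style loop that alternates weighted PCA with residual-based reweighting; all names, the simplified objective, and the weight formula below are assumptions for the sketch, not the authors' algorithm:

```python
import numpy as np

def l2p_norm(R, p):
    # ℓ2,p norm of a residual matrix R: sum over samples (columns) of ||r_i||_2^p
    return np.sum(np.linalg.norm(R, axis=0) ** p)

def robust_pca_l2p(X, k, p=1.0, iters=50, eps=1e-8):
    """IRLS-style sketch (not the paper's non-greedy solver):
    alternate a weighted PCA step with reweighting, so samples with
    large residuals (likely outliers) are progressively down-weighted.
    Columns of X are samples."""
    n = X.shape[1]
    w = np.ones(n)
    for _ in range(iters):
        mu = X @ w / w.sum()                  # weighted center (stand-in for the bias term)
        Xc = X - mu[:, None]
        C = (Xc * w) @ Xc.T / w.sum()         # weighted covariance
        _, U = np.linalg.eigh(C)              # eigenvectors in ascending eigenvalue order
        W = U[:, -k:]                         # top-k principal directions (d x k)
        R = Xc - W @ (W.T @ Xc)               # per-sample reconstruction residuals
        norms = np.linalg.norm(R, axis=0)
        # IRLS weight for an ℓ2,p loss: derivative of ||r||^p w.r.t. ||r||^2
        w = (p / 2.0) * (norms ** 2 + eps) ** (p / 2 - 1)
    return W, mu

# usage: recover a 2-D subspace from noisy 5-D data with a few gross outliers
rng = np.random.default_rng(0)
Z = rng.normal(size=(2, 200))
A = rng.normal(size=(5, 2))
X = A @ Z + 0.01 * rng.normal(size=(5, 200))
X[:, :5] += 10 * rng.normal(size=(5, 5))      # corrupt the first 5 samples
W, mu = robust_pca_l2p(X, k=2, p=0.8)
residual = (X - mu[:, None]) - W @ (W.T @ (X - mu[:, None]))
loss = l2p_norm(residual, p=0.8)
```

With p < 1 the reweighting suppresses heavy-tailed outliers more aggressively than standard ℓ2 PCA; this is the general intuition behind ℓ2,p-norm models, though the paper's joint ℓ2,μ/ℓ2,ν formulation with truncation is more elaborate than this sketch.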
Journal introduction:
Neural Networks is a platform that aims to foster an international community of scholars and practitioners interested in neural networks, deep learning, and other approaches to artificial intelligence and machine learning. Our journal invites submissions covering various aspects of neural networks research, from computational neuroscience and cognitive modeling to mathematical analyses and engineering applications. By providing a forum for interdisciplinary discussions between biology and technology, we aim to encourage the development of biologically-inspired artificial intelligence.