Dependently Coupled Principal Component Analysis for Bivariate Inversion Problems.

Navdeep Dahiya, Yifei Fan, Samuel Bignardi, Romeil Sandhu, Anthony Yezzi
DOI: 10.1109/icpr48806.2021.9413305
Journal: Proceedings of the ... IAPR International Conference on Pattern Recognition
Published: 2021-01-01 (Epub 2021-05-05)
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8330695/pdf/nihms-1676299.pdf
Citations: 0

Abstract

Principal Component Analysis (PCA) is a widely used technique for dimensionality reduction in various problem domains, including data compression, image processing, visualization, exploratory data analysis, pattern recognition, time-series prediction, and machine learning. Often, data is presented in a correlated paired manner such that there exist observable and correlated unobservable measurements. Unfortunately, traditional PCA techniques generally fail to optimally capture the leverageable correlations between such paired data, as they do not yield a maximally correlated basis between the observable and unobservable counterparts. That instead is the objective of Canonical Correlation Analysis (and the more general Partial Least Squares methods); however, such techniques are still symmetric in maximizing correlation (covariance for PLSR) over all choices of the basis for both datasets, without differentiating between observable and unobservable variables (except in the regression phase of PLSR). Further, these methods deviate from PCA's formulation objective of minimizing approximation error, seeking instead to maximize correlation or covariance. While these are sensible optimization objectives, they are not equivalent to error minimization. We therefore introduce a new method of leveraging PCA between paired datasets in a dependently coupled manner, which is optimal with respect to approximation error during training. We generate a dependently coupled paired basis for which we relax orthogonality constraints in decomposing unreliable unobservable measurements. This allows us to optimally capture the variations of the observable data while conditionally minimizing the expected prediction error for the unobservable component. We show preliminary results demonstrating improved learning with our proposed method compared to traditional techniques.
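The bivariate setting described above can be made concrete with a small numerical sketch. The code below is not the paper's dependently coupled decomposition; it is a baseline illustrating the asymmetry the abstract highlights: plain PCA builds a basis for the observable data X alone (ignoring the unobservable Y entirely), after which Y must be predicted from the X-scores by least squares (a PCR-style two-step approach). All names and the synthetic data-generation setup are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic paired data: X is observable, Y is a correlated
# unobservable counterpart driven by the same latent factors.
n, dx, dy, k = 200, 10, 6, 3
Z = rng.standard_normal((n, k))  # shared latent factors
X = Z @ rng.standard_normal((k, dx)) + 0.05 * rng.standard_normal((n, dx))
Y = Z @ rng.standard_normal((k, dy)) + 0.05 * rng.standard_normal((n, dy))
X -= X.mean(axis=0)
Y -= Y.mean(axis=0)

# Plain PCA on X alone: the basis is chosen without any reference to Y.
U, S, Vt = np.linalg.svd(X, full_matrices=False)
Wx = Vt[:k].T        # top-k principal directions of X
Tx = X @ Wx          # scores of X in its own PCA basis

# Predict the unobservable Y from the X-scores by least squares.
B, *_ = np.linalg.lstsq(Tx, Y, rcond=None)
Y_hat = Tx @ B
err = np.linalg.norm(Y - Y_hat) / np.linalg.norm(Y)
print(f"relative prediction error: {err:.3f}")
```

Because the PCA basis here is optimal only for reconstructing X, not for predicting Y, the two objectives can diverge; the paper's contribution is a coupled basis that minimizes approximation error while conditionally minimizing the expected prediction error for the unobservable component.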
