Fast Partition-Based Cross-Validation With Centering and Scaling for $\mathbf{X}^{\mathbf{T}}\mathbf{X}$ and $\mathbf{X}^{\mathbf{T}}\mathbf{Y}$
Authors: Ole-Christian Galbo Engstrøm, Martin Holm Jensen
Journal: Journal of Chemometrics, Volume 39, Issue 3
DOI: 10.1002/cem.70008
Published: 2025-03-13
Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/cem.70008
Citations: 0
Abstract
We present algorithms that substantially accelerate partition-based cross-validation for machine learning models that require the matrix products $\mathbf{X}^{\mathbf{T}}\mathbf{X}$ and $\mathbf{X}^{\mathbf{T}}\mathbf{Y}$. Our algorithms have applications in model selection for, for example, principal component analysis (PCA), principal component regression (PCR), ridge regression (RR), ordinary least squares (OLS), and partial least squares (PLS). They support all combinations of column-wise centering and scaling of $\mathbf{X}$ and $\mathbf{Y}$, and we demonstrate in our accompanying implementation that this adds only a manageable, practical constant over efficient variants without preprocessing. We prove the correctness of our algorithms under a fold-based partitioning scheme and show that their running time is independent of the number of folds; that is, they have the same time complexity as computing $\mathbf{X}^{\mathbf{T}}\mathbf{X}$ and $\mathbf{X}^{\mathbf{T}}\mathbf{Y}$, and space complexity equivalent to storing $\mathbf{X}$, $\mathbf{Y}$, $\mathbf{X}^{\mathbf{T}}\mathbf{X}$, and $\mathbf{X}^{\mathbf{T}}\mathbf{Y}$. Importantly, unlike alternatives found in the literature, we avoid data leakage due to preprocessing. We achieve these results by eliminating redundant computations in the overlap between training partitions. Concretely, we show how to manipulate $\mathbf{X}^{\mathbf{T}}\mathbf{X}$ and $\mathbf{X}^{\mathbf{T}}\mathbf{Y}$ using only samples from the validation partition to obtain the preprocessed training partition-wise $\mathbf{X}^{\mathbf{T}}\mathbf{X}$ and $\mathbf{X}^{\mathbf{T}}\mathbf{Y}$. To our knowledge, we are the first to derive correct and efficient cross-validation algorithms for any of the 16 combinations of column-wise centering and scaling, of which we also prove only 12 give distinct matrix products.
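The core idea the abstract describes — recovering each training partition's matrix products from the global ones using only the validation rows — can be sketched as follows. This is a minimal illustration with synthetic data, not the authors' implementation; all names, dimensions, and the 4-fold partition are my own choices. It computes $\mathbf{X}^{\mathbf{T}}\mathbf{X}$ and $\mathbf{X}^{\mathbf{T}}\mathbf{Y}$ once, then subtracts each validation block's contribution, and also shows one preprocessing case (column-wise centering) using training-only statistics so no validation information leaks in.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, M = 100, 5, 3
X = rng.standard_normal((N, K))
Y = rng.standard_normal((N, M))

# Global products, computed once: O(N K^2) and O(N K M).
XtX = X.T @ X
XtY = X.T @ Y

folds = np.arange(N) % 4  # a 4-fold partition of the N samples

for f in range(4):
    val = folds == f
    X_val, Y_val = X[val], Y[val]

    # Training-partition products, touching only validation rows:
    # X_train^T X_train = X^T X - X_val^T X_val, and likewise for X^T Y.
    XtX_train = XtX - X_val.T @ X_val
    XtY_train = XtY - X_val.T @ Y_val

    # One preprocessing combination (centering X) without leakage:
    # the training column mean is derived from global and validation sums,
    # and (X_t - 1 mu^T)^T (X_t - 1 mu^T) = X_t^T X_t - n_t * mu mu^T.
    n_train = N - int(val.sum())
    mu = (X.sum(axis=0) - X_val.sum(axis=0)) / n_train
    XtX_train_centered = XtX_train - n_train * np.outer(mu, mu)

    # Sanity checks against the naive O(folds * N K^2) recomputation:
    Xc = X[~val] - mu
    assert np.allclose(XtX_train, X[~val].T @ X[~val])
    assert np.allclose(XtY_train, X[~val].T @ Y[~val])
    assert np.allclose(XtX_train_centered, Xc.T @ Xc)
```

Because the per-fold work depends only on the validation block's size, summing over all folds costs the same as one pass over $\mathbf{X}$, which matches the abstract's claim that total running time is independent of the number of folds.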
About the Journal
The Journal of Chemometrics is devoted to the rapid publication of original scientific papers, reviews and short communications on fundamental and applied aspects of chemometrics. It also provides a forum for the exchange of information on meetings and other news relevant to the growing community of scientists who are interested in chemometrics and its applications. Short, critical review papers are a particularly important feature of the journal, in view of the multidisciplinary readership at which it is aimed.