Editorial special issue: Bridging the gap between AI and Statistics
Benjamin Säfken, David Rügamer
Pub Date: 2024-06-21 | DOI: 10.1007/s10182-024-00503-4
AStA Advances in Statistical Analysis, 108(2), 225-229

Markov-switching decision trees
Timo Adam, Marius Ötting, Rouven Michels
Pub Date: 2024-05-29 | DOI: 10.1007/s10182-024-00501-6
AStA Advances in Statistical Analysis, 108(2), 461-476

Decision trees constitute a simple yet powerful and interpretable machine learning tool. While tree-based methods are designed for cross-sectional data only, we propose an approach that combines decision trees with time series modeling and thereby bridges the gap between machine learning and statistics. In particular, we combine decision trees with hidden Markov models: at each time point, an underlying (hidden) Markov chain selects the tree that generates the corresponding observation. We propose an estimation approach based on the expectation-maximisation algorithm and assess its feasibility in simulation experiments. In our real-data application, we use eight seasons of National Football League (NFL) data to predict play calls conditional on covariates, such as the current quarter and the score, where the model’s states can be linked to the teams’ strategies. R code implementing the proposed method is available on GitHub.

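The generative mechanism described above can be sketched in a few lines. This is a toy illustration, not the authors' R implementation: the two-state transition matrix, the hand-coded depth-1 "trees" (stumps), and all numeric values are hypothetical.

```python
import numpy as np

# Hypothetical two-state transition matrix and two depth-1 "trees" (stumps).
GAMMA = np.array([[0.95, 0.05],
                  [0.10, 0.90]])
trees = [
    lambda x: 1.0 if x < 0.5 else 3.0,   # tree active in state 0
    lambda x: 2.0 if x < 0.3 else 5.0,   # tree active in state 1
]

def simulate(n, rng):
    """Generate covariates x and observations y; at each time point the
    hidden Markov chain picks which tree produces the observation."""
    states, x, y = [], rng.uniform(size=n), []
    s = 0
    for t in range(n):
        s = rng.choice(2, p=GAMMA[s])    # chain transition
        states.append(s)
        y.append(trees[s](x[t]) + rng.normal(scale=0.1))
    return np.array(x), np.array(y), np.array(states)

x, y, states = simulate(500, rng=np.random.default_rng(0))
```

In the paper the trees themselves are estimated jointly with the chain via EM; here they are fixed by hand purely to show the switching structure.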
Markov switching stereotype logit models for longitudinal ordinal data affected by unobserved heterogeneity in responding behavior
Roberto Colombi, Sabrina Giordano
Pub Date: 2024-05-15 | DOI: 10.1007/s10182-024-00500-7
AStA Advances in Statistical Analysis, 109(1), 117-147

When asked to express opinions about attitudes or perceptions on a Likert scale, respondents often endorse the midpoint or the extremes of the scale, or agree or disagree regardless of content. These responding behaviors are known in the psychometric literature as middle, extreme, acquiescence and disacquiescence response styles, and they generally bias the results. A key motivation behind our approach is to account for these attitudes and for how they evolve over time. The novelty of our proposal, in the context of longitudinal ordered categorical data, is to consider simultaneously the temporal dynamics of the responses (observable ordinal variables) and the unobservable answering behaviors, possibly influenced by response styles, through a Markov switching logit model with two latent components. One component accommodates serial dependence and respondents’ unobserved heterogeneity; the other determines the responding attitude (driven by response styles or not). The dependence of the responses on covariates is modelled by a stereotype logit model whose parameters vary with the two latent components. The stereotype logit model is adopted because it is a flexible extension of the proportional odds logit model that retains the advantage of using a single parameter to describe a regressor effect. The paper also gives a new interpretation of the parameters of the stereotype model by defining allocation sets: intervals of values of the linear predictor that identify the most probable response. Unobserved heterogeneity, serial dependence and the tendency to response styles are modelled with our approach on longitudinal data collected by the Bank of Italy.

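The stereotype logit form referred to above can be sketched numerically. All parameter values (alpha, phi, beta) are made up for illustration; the final scan over the linear predictor mirrors the allocation-set idea, i.e. intervals on which one response category is the most probable.

```python
import numpy as np

def stereotype_probs(x, alpha, phi, beta):
    """Stereotype logit: P(Y = j | x) proportional to exp(alpha_j + phi_j * beta'x).
    A single coefficient vector beta describes each regressor's effect,
    rescaled per category by the scores phi."""
    eta = alpha + phi * (beta @ x)
    p = np.exp(eta - eta.max())              # softmax, numerically stabilized
    return p / p.sum()

alpha = np.array([0.0, 0.5, 0.2, -0.3])      # illustrative category intercepts
phi = np.array([0.0, 0.4, 0.7, 1.0])         # increasing category scores
beta = np.array([1.2, -0.8])                 # one effect per covariate
p = stereotype_probs(np.array([0.5, 1.0]), alpha, phi, beta)

# Allocation sets: as the linear predictor beta'x sweeps the real line, the
# most probable category changes only at finitely many cut points, so each
# category's allocation set is an interval.
lp_grid = np.linspace(-5.0, 5.0, 201)
modal = np.array([np.argmax(alpha + phi * lp) for lp in lp_grid])
```

With increasing scores phi, the modal category is monotone in the linear predictor, which is what makes the allocation sets intervals.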
Deducing neighborhoods of classes from a fitted model
Alexander Gerharz, Andreas Groll, Gunther Schauberger
Pub Date: 2024-05-08 | DOI: 10.1007/s10182-024-00502-5
AStA Advances in Statistical Analysis, 108(2), 395-425

In this article, a new kind of interpretable machine learning method is presented that uses quantile shifts to help understand how a classification model partitions the feature space into predicted classes, thereby making the underlying statistical or machine learning model more trustworthy. The idea is to take real data points (or specific points of interest) and observe how the prediction changes after slightly raising or lowering specific features. By comparing the predictions before and after the shifts, under certain conditions the observed changes can be interpreted as neighborhoods of the classes with respect to the shifted features. Chord diagrams are used to visualize the observed changes. For illustration, this quantile shift method (QSM) is applied to an artificial example with medical labels and to a real data example.

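The before/after comparison can be sketched roughly as follows. In the paper the shifts are taken on the feature's quantile scale; this sketch substitutes a simple shift proportional to the feature's range, and `toy_classifier` is a hypothetical stand-in for any fitted model.

```python
import numpy as np

def toy_classifier(X):
    """Hypothetical fitted model: classes 0/1/2 from two thresholds on feature 0."""
    return np.digitize(X[:, 0], [0.3, 0.7])

def shift_transitions(model, X, feature, delta):
    """Shift one feature upward by delta * (its range), re-predict, and tally
    class transitions; entry (i, j) counts points moving from class i to j."""
    lo, hi = X[:, feature].min(), X[:, feature].max()
    X_up = X.copy()
    X_up[:, feature] = np.clip(X_up[:, feature] + delta * (hi - lo), lo, hi)
    before, after = model(X), model(X_up)
    k = max(before.max(), after.max()) + 1
    trans = np.zeros((k, k), dtype=int)
    np.add.at(trans, (before, after), 1)
    return trans

rng = np.random.default_rng(1)
X = rng.uniform(size=(200, 2))
trans = shift_transitions(toy_classifier, X, feature=0, delta=0.1)
```

The transition matrix is exactly the kind of object a chord diagram visualizes: off-diagonal mass reveals which classes neighbor each other along the shifted feature.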
Testing distributional assumptions in CUB models for the analysis of rating data
Francesca Di Iorio, Riccardo Lucchetti, Rosaria Simone
Pub Date: 2024-04-13 | DOI: 10.1007/s10182-024-00498-y
AStA Advances in Statistical Analysis, 108(3), 669-701

In this paper, we propose a portmanteau test for misspecification in combination of uniform and binomial (CUB) models for the analysis of ordered rating data. Specifically, the test belongs to the class of information matrix (IM) tests, which are based on the information matrix equality. Monte Carlo evidence indicates that the test has excellent finite-sample properties in terms of actual size and power against several alternatives. Unlike other tests of the IM family, finite-sample adjustments based on the bootstrap appear unnecessary. An empirical application illustrates how the IM test can supplement model validation and selection.

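The information matrix equality underlying IM tests states that, at the true parameter, the expected outer product of the score equals the negative expected Hessian of the log-likelihood. A Monte Carlo check for a simple Poisson model (not the CUB likelihood itself, which is more involved):

```python
import numpy as np

rng = np.random.default_rng(2)
lam = 2.0
y = rng.poisson(lam, size=200_000).astype(float)

# Per-observation score and Hessian of the Poisson log-likelihood
# l(lam; y) = y * log(lam) - lam, evaluated at the true lambda.
score = y / lam - 1.0
hess = -y / lam**2

# Information matrix equality: E[score^2] + E[Hessian] = 0 at the true
# parameter; IM tests reject when the sample analogue of this gap is
# significantly far from zero.
im_gap = np.mean(score**2) + np.mean(hess)
```

Here both expectations equal 1/lambda, so the sample gap should be near zero; under misspecification the two sides diverge, which is what the test statistic picks up.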
Testing for periodicity at an unknown frequency under cyclic long memory, with applications to respiratory muscle training
Jan Beran, Jeremy Näscher, Fabian Pietsch, Stephan Walterspacher
Pub Date: 2024-04-12 | DOI: 10.1007/s10182-024-00499-x
AStA Advances in Statistical Analysis, 108(4), 705-731

A frequent problem in applied time series analysis is the identification of dominating periodic components. A particularly difficult task is to distinguish deterministic periodic signals from periodic long memory. In this paper, a family of test statistics based on Whittle’s Gaussian log-likelihood approximation is proposed. Asymptotic critical regions and bounds for the asymptotic power are derived. In cases where a deterministic periodic signal and periodic long memory share the same frequency, consistency and rates of type II error probabilities depend on the long-memory parameter. Simulations and an application to respiratory muscle training data illustrate the results.

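Whittle's approximation replaces the exact Gaussian log-likelihood by a sum over Fourier frequencies involving the periodogram and a candidate spectral density. A sketch for an AR(1) spectrum (purely illustrative; the paper's setting involves cyclic long memory rather than AR(1)):

```python
import numpy as np

def whittle_loglik(x, spec_dens):
    """Whittle's Gaussian log-likelihood approximation:
    -sum over Fourier frequencies of [log f(w_j) + I(w_j) / f(w_j)],
    where I is the periodogram and f the candidate spectral density."""
    n = len(x)
    freqs = 2 * np.pi * np.arange(1, (n - 1) // 2 + 1) / n
    periodogram = np.abs(np.fft.fft(x - x.mean())[1:len(freqs) + 1]) ** 2 / (2 * np.pi * n)
    f = spec_dens(freqs)
    return -np.sum(np.log(f) + periodogram / f)

def ar1_spec(phi, sigma2):
    """Spectral density of a Gaussian AR(1) process."""
    return lambda w: sigma2 / (2 * np.pi * (1 - 2 * phi * np.cos(w) + phi**2))

# Simulate an AR(1) path; the Whittle likelihood should favor the true phi.
rng = np.random.default_rng(3)
x = np.zeros(2048)
for t in range(1, len(x)):
    x[t] = 0.6 * x[t - 1] + rng.normal()
ll_true = whittle_loglik(x, ar1_spec(0.6, 1.0))
ll_wrong = whittle_loglik(x, ar1_spec(-0.6, 1.0))
```

Test statistics of the kind proposed in the paper are built from such likelihood comparisons evaluated over candidate frequencies.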
Bernstein flows for flexible posteriors in variational Bayes
Oliver Dürr, Stefan Hörtling, Danil Dold, Ivonne Kovylov, Beate Sick
Pub Date: 2024-04-03 | DOI: 10.1007/s10182-024-00497-z
AStA Advances in Statistical Analysis, 108(2), 375-394

Black-box variational inference (BBVI) is a technique to approximate the posterior of Bayesian models by optimization. Similar to MCMC, the user only needs to specify the model; then, the inference procedure is done automatically. In contrast to MCMC, BBVI scales to many observations, is faster for some applications, and can take advantage of highly optimized deep learning frameworks since it can be formulated as a minimization task. In the case of complex posteriors, however, other state-of-the-art BBVI approaches often yield unsatisfactory posterior approximations. This paper presents Bernstein flow variational inference (BF-VI), a robust and easy-to-use method flexible enough to approximate complex multivariate posteriors. BF-VI combines ideas from normalizing flows and Bernstein polynomial-based transformation models. In benchmark experiments, we compare BF-VI solutions with exact posteriors, MCMC solutions, and state-of-the-art BBVI methods, including normalizing flow-based BBVI. We show for low-dimensional models that BF-VI accurately approximates the true posterior; in higher-dimensional models, BF-VI compares favorably against other BBVI methods. Further, using BF-VI, we develop a Bayesian model for the semi-structured melanoma challenge data, combining a CNN model part for image data with an interpretable model part for tabular data, and demonstrate, for the first time, the use of BBVI in semi-structured models.

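The core building block, a monotone Bernstein-polynomial transformation of a standard normal base variable, can be sketched as follows. The coefficients and sizes are illustrative; in BF-VI itself, theta is fitted by optimizing the variational objective rather than fixed by hand.

```python
import numpy as np
from math import comb

def bernstein_transform(z, theta):
    """Monotone transform of base samples z ~ N(0,1): squash z to (0, 1),
    then apply sum_k theta_k * B_{k,M}(u) with the Bernstein basis B.
    Monotonicity holds because theta is increasing."""
    u = 1.0 / (1.0 + np.exp(-z))                         # map R -> (0, 1)
    M = len(theta) - 1
    basis = np.array([comb(M, k) * u**k * (1 - u) ** (M - k) for k in range(M + 1)])
    return theta @ basis

# Increasing coefficients guarantee a monotone (invertible) map.
theta = np.cumsum(np.array([0.1, 0.2, 2.0, 0.1, 3.0]))
rng = np.random.default_rng(4)
z = rng.normal(size=10_000)
samples = bernstein_transform(z, theta)                  # flexible 1-D "posterior"
```

Pushing a simple base distribution through such a learned monotone map is what lets the variational family represent skewed or multimodal posteriors.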
Variational inference: uncertainty quantification in additive models
Jens Lichter, Paul F. V. Wiemann, Thomas Kneib
Pub Date: 2024-04-03 | DOI: 10.1007/s10182-024-00492-4
AStA Advances in Statistical Analysis, 108(2), 279-331

Markov chain Monte Carlo (MCMC)-based simulation approaches are by far the most common method in Bayesian inference for accessing the posterior distribution. Recently, motivated by successes in machine learning, variational inference (VI) has gained interest in statistics, since it promises a computationally efficient alternative to MCMC that enables approximate access to the posterior. Classical approaches such as mean-field VI (MFVI), however, rely on the strong mean-field assumption for the approximate posterior, under which parameters or parameter blocks are assumed to be mutually independent. As a consequence, parameter uncertainties are often underestimated, and alternatives such as semi-implicit VI (SIVI) have been suggested to avoid the mean-field assumption and improve uncertainty estimates. SIVI uses a hierarchical construction of the variational parameters to restore parameter dependencies and relies on a highly flexible implicit mixing distribution whose probability density function is not analytic but from which samples can be drawn via a stochastic procedure. In this paper, we investigate how different forms of VI perform in semiparametric additive regression models, one of the most important fields of application of Bayesian inference in statistics. A particular focus is on the ability of the rival approaches to quantify uncertainty, especially with correlated covariates, which are likely to aggravate the difficulties caused by simplifying VI assumptions. Moreover, we propose a method that combines the advantages of MFVI and SIVI and compare its performance. The different VI approaches are studied in comparison with MCMC in simulations and in an application to tree height models of Douglas fir based on a large-scale forestry data set.

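The underestimation of uncertainty under the mean-field assumption can be seen exactly for a Gaussian target: the best fully factorized Gaussian approximation (the minimizer of KL(q || p)) recovers the conditional rather than the marginal variances.

```python
import numpy as np

# Target "posterior": bivariate Gaussian with strongly correlated parameters.
Sigma = np.array([[1.0, 0.9],
                  [0.9, 1.0]])
Lam = np.linalg.inv(Sigma)   # precision matrix

# The optimal mean-field Gaussian q minimizing KL(q || p) has variances
# 1 / Lam_ii -- the conditional variances -- not the marginal variances.
mf_var = 1.0 / np.diag(Lam)
marg_var = np.diag(Sigma)
```

With correlation 0.9, the mean-field variances shrink to 1 - 0.9^2 = 0.19 against true marginal variances of 1, which is exactly the kind of over-confidence that correlated covariates induce in additive models.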
Ridge regularization for spatial autoregressive models with multicollinearity issues
Cristina O. Chavez-Chong, Cécile Hardouin, Ana-Karina Fermin
Pub Date: 2024-04-01 | DOI: 10.1007/s10182-024-00496-0
AStA Advances in Statistical Analysis, 109(1), 25-52

This work proposes a new method for building an explanatory spatial autoregressive model in a multicollinearity context. We use Ridge regularization to bypass the collinearity issue. We present new estimation algorithms that allow for the estimation of the regression coefficients as well as the spatial dependence parameter. A spatial cross-validation procedure is used to tune the regularization parameter. In fact, ordinary cross-validation techniques are not applicable to spatially dependent observations. Variable importance is assessed by permutation tests since classical tests are not valid after Ridge regularization. We assess the performance of our methodology through numerical experiments conducted on simulated synthetic data. Finally, we apply our method to a real data set and evaluate the impact of some socioeconomic variables on the COVID-19 intensity in France.

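One ingredient can be sketched concretely: with the spatial dependence parameter held fixed, the ridge estimate of the regression coefficients is a penalized regression on the spatially filtered response. Everything below (the line-graph weight matrix, rho, lambda, the simulated collinear design) is illustrative, not the authors' algorithm.

```python
import numpy as np

def sar_ridge_beta(y, X, W, rho, lam):
    """For the SAR model y = rho * W y + X beta + eps with rho given, the ridge
    estimate of beta is (X'X + lam I)^{-1} X'(y - rho W y): a ridge regression
    on the spatially filtered response (one step, not the joint estimation)."""
    y_f = y - rho * (W @ y)
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y_f)

rng = np.random.default_rng(5)
n = 100
# Row-standardized nearest-neighbour weight matrix on a line of sites.
W = np.zeros((n, n))
for i in range(n):
    for j in (i - 1, i + 1):
        if 0 <= j < n:
            W[i, j] = 1.0
W /= W.sum(axis=1, keepdims=True)
X = rng.normal(size=(n, 2))
X[:, 1] = X[:, 0] + 0.01 * rng.normal(size=n)     # near-collinear covariates
beta_true = np.array([1.0, 1.0])
y = np.linalg.solve(np.eye(n) - 0.4 * W, X @ beta_true + 0.1 * rng.normal(size=n))
beta_hat = sar_ridge_beta(y, X, W, rho=0.4, lam=1.0)
```

With near-collinear columns, ordinary least squares would be unstable here; the ridge penalty keeps the combined effect of the two covariates well estimated even though the individual coefficients are only weakly identified.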
Using sequential statistical tests for efficient hyperparameter tuning
Philip Buczak, Andreas Groll, Markus Pauly, Jakob Rehof, Daniel Horn
Pub Date: 2024-03-14 | DOI: 10.1007/s10182-024-00495-1
AStA Advances in Statistical Analysis, 108(2), 441-460

Hyperparameter tuning is one of the most time-consuming parts of machine learning. Despite modern optimization algorithms that minimize the number of evaluations needed, evaluating a single setting may still be expensive. Usually, a resampling technique is used in which the machine learning method has to be fitted a fixed number k of times on different training datasets; the mean performance of the k fits then serves as the performance estimator. Many hyperparameter settings could be discarded after fewer than k resampling iterations if they are clearly inferior to high-performing settings. However, resampling is often carried out until the very end, wasting considerable computational effort. To this end, we propose the sequential random search (SQRS), which extends the regular random search algorithm by a sequential testing procedure aimed at detecting and eliminating inferior parameter configurations early. We compared SQRS with regular random search on multiple publicly available regression and classification datasets. Our simulation study shows that SQRS finds similarly well-performing parameter settings while requiring noticeably fewer evaluations. These results underscore the potential of integrating sequential tests into hyperparameter tuning.

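The early-discarding idea can be sketched with a simple gap rule in place of the paper's formal sequential test; the toy objective, margin, and budget below are all hypothetical.

```python
import numpy as np

def sqrs_sketch(objective, sample_config, n_configs, k_folds, margin, rng):
    """Random search with early discarding: evaluate each configuration fold by
    fold and drop it once its running mean trails the best completed mean by
    more than `margin` (a simple gap rule standing in for a sequential test)."""
    best_cfg, best_mean = None, -np.inf
    evals = 0
    for _ in range(n_configs):
        cfg, scores = sample_config(rng), []
        for fold in range(k_folds):
            scores.append(objective(cfg, fold))
            evals += 1
            if np.mean(scores) < best_mean - margin:
                break                      # clearly inferior: stop resampling early
        else:                              # all k folds completed
            m = np.mean(scores)
            if m > best_mean:
                best_cfg, best_mean = cfg, m
    return best_cfg, best_mean, evals

# Toy tuning problem: the score peaks at parameter 0.7, with fold-level noise.
rng = np.random.default_rng(6)
def objective(cfg, fold):                  # `fold` unused in this toy objective
    return -(cfg - 0.7) ** 2 + 0.01 * rng.normal()

best_cfg, best_mean, evals = sqrs_sketch(
    objective, lambda r: r.uniform(), n_configs=50, k_folds=10, margin=0.05, rng=rng)
```

A full random search would spend 50 * 10 = 500 fold evaluations; the gap rule abandons clearly inferior configurations after a few folds, which is the saving the paper quantifies with a proper sequential test.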