Pub Date: 2024-07-16. DOI: 10.1007/s10182-024-00506-1
Susanna Levantesi, Andrea Nigri, Paolo Pagnottoni, Alessandro Spelta
We propose to investigate the joint dynamics of regional gross domestic product and life expectancy in Italy through Wasserstein barycenter regression derived from optimal transport theory. Wasserstein barycenter regression has the advantage of being flexible in modeling complex data distributions, given its ability to capture multimodal relationships, while allowing uncertainty and priors to be incorporated, as well as yielding interpretable results. The main findings reveal that regional clusters tend to emerge, highlighting inequalities among Italian regions in economic and life expectancy terms. This suggests that targeted policy actions at a regional level fostering equitable development, especially from an economic viewpoint, might reduce regional inequality. Our results are validated by a robustness check on a human mobility dataset and by an illustrative forecasting exercise, which confirms the model’s ability to estimate and predict joint distributions and produce novel empirical evidence.
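The abstract does not reproduce the regression machinery, but its core building block can be sketched. The following is a minimal illustration (not the authors' implementation), assuming one-dimensional empirical distributions of equal sample size, where the Wasserstein-2 barycenter reduces to a weighted average of the sorted samples (i.e., of the quantile functions):

```python
def wasserstein_barycenter_1d(samples, weights):
    """Wasserstein-2 barycenter of one-dimensional empirical distributions
    with equal sample sizes: in 1-D it is the weighted average of the
    quantile functions, i.e. of the sorted samples."""
    n = len(samples[0])
    assert all(len(s) == n for s in samples)
    sorted_samples = [sorted(s) for s in samples]
    return [sum(w * s[i] for w, s in zip(weights, sorted_samples))
            for i in range(n)]

# Two toy 'regional' distributions; equal weights interpolate halfway.
a = [0.0, 1.0, 2.0]
b = [4.0, 5.0, 6.0]
mid = wasserstein_barycenter_1d([a, b], [0.5, 0.5])
# mid == [2.0, 3.0, 4.0]
```

With weights [0.5, 0.5] the barycenter sits halfway between the two distributions in quantile space, which is the interpolation property that barycentric regression exploits.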
Title: Wasserstein barycenter regression: application to the joint dynamics of regional GDP and life expectancy in Italy
AStA Advances in Statistical Analysis, 109(2), 313-336. Open access PDF: https://link.springer.com/content/pdf/10.1007/s10182-024-00506-1.pdf
Pub Date: 2024-07-08. DOI: 10.1007/s10182-024-00507-0
Anagh Chattopadhyay, Soudeep Deb
It is often of primary interest to analyze and forecast the levels of a continuous phenomenon as a categorical variable. In this paper, we propose a new spatio-temporal model to deal with this problem in a binary setting, with an interesting application related to the COVID-19 pandemic, a phenomenon that depends on both spatial proximity and temporal auto-correlation. Our model is defined through a hierarchical structure for the latent variable, which corresponds to the probit-link function. The mean of the latent variable in the proposed model is designed to capture the trend and the seasonal pattern as well as the lagged effects of relevant regressors. The covariance structure of the model is defined as an additive combination of a zero-mean spatio-temporally correlated process and a white noise process. The parameters associated with the space-time process enable us to analyze the effect of proximity of two points with respect to space or time and its influence on the overall process. For estimation and prediction, we adopt a complete Bayesian framework along with suitable prior specifications and utilize the concepts of Gibbs sampling. Using county-level data from the state of New York, we show that the proposed methodology provides superior performance to benchmark techniques. We also use our model to devise a novel mechanism for predictive clustering which can be leveraged to develop localized policies.
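As a minimal sketch of the probit-link latent-variable idea (deliberately ignoring the paper's spatio-temporal covariance structure), the binary outcome equals 1 exactly when a latent Gaussian variable exceeds zero, so P(y = 1) = Φ(mean):

```python
import math
import random

def norm_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def simulate_probit(mean, n, seed=0):
    """Probit latent-variable scheme: observe y = 1 exactly when the
    latent Gaussian variable mean + noise exceeds zero."""
    rng = random.Random(seed)
    return [1 if mean + rng.gauss(0.0, 1.0) > 0.0 else 0 for _ in range(n)]

mu = 0.5
draws = simulate_probit(mu, 100_000)
empirical = sum(draws) / len(draws)
theoretical = norm_cdf(mu)  # P(y = 1) = Phi(mu), about 0.6915
```

In the paper the latent mean additionally carries trend, seasonality, and lagged regressors, and the noise is spatio-temporally correlated rather than independent.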
Title: A spatio-temporal model for binary data and its application in analyzing the direction of COVID-19 spread
AStA Advances in Statistical Analysis, 108(4), 823-851.
Pub Date: 2024-07-02. DOI: 10.1007/s10182-024-00504-3
Jinsu Park, Yoonjin Lee, Daewon Yang, Jongho Park, Hohyun Jung
Considerable research has been devoted to understanding the popularity effect on art market dynamics, meaning that artworks by popular artists tend to have high prices. The hedonic pricing model has employed artists’ reputation attributes, such as survey results, to understand the popularity effect, but the reputation attributes are constant and not properly defined at the point of artwork sales. Moreover, the artist’s ability has been measured via a random effect in the hedonic model, which fails to reflect ability changes. To remedy these problems, we present a method to define the popularity measure using the artwork sales dataset without relying on the artist’s reputation attributes. Also, we propose a novel pricing model to appropriately infer the time-dependent artist’s abilities using the presented popularity measure. An inference algorithm is presented using the EM algorithm and Gibbs sampling to estimate model parameters and artist abilities. We use the Artnet dataset to investigate the size of the rich-get-richer effect and the variables affecting artwork prices in real-world art market dynamics. We further conduct inferences about artists’ abilities under the popularity effect and examine how ability changes over time for various artists, yielding notable interpretations.
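The abstract does not state the paper's exact popularity definition, so the following is a purely hypothetical stand-in that has the two properties the authors require of their measure: it is computable from the sales data alone at the point of each sale, and it varies over time (here, an exponentially discounted count of the artist's past sales):

```python
import math

def popularity_at(sale_times, t, half_life=5.0):
    """Hypothetical popularity measure: exponentially discounted count of
    an artist's sales strictly before time t (so it is defined at the
    moment of each new sale and evolves over time)."""
    decay = math.log(2.0) / half_life
    return sum(math.exp(-decay * (t - s)) for s in sale_times if s < t)

past_sales = [0.0, 1.0, 4.0]        # years of the artist's earlier sales
p = popularity_at(past_sales, 5.0)  # popularity entering a sale at t = 5
```

Recent sales count more than old ones, so a burst of sales raises the measure and it decays if the artist stops selling; the half-life parameter is an assumption of this sketch, not a value from the paper.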
Title: Artwork pricing model integrating the popularity and ability of artists
AStA Advances in Statistical Analysis, 108(4), 889-913. Open access PDF: https://link.springer.com/content/pdf/10.1007/s10182-024-00504-3.pdf
Pub Date: 2024-06-21. DOI: 10.1007/s10182-024-00503-4
Benjamin Säfken, David Rügamer
Title: Editorial special issue: Bridging the gap between AI and Statistics
AStA Advances in Statistical Analysis, 108(2), 225-229.
Pub Date: 2024-05-29. DOI: 10.1007/s10182-024-00501-6
Timo Adam, Marius Ötting, Rouven Michels
Decision trees constitute a simple yet powerful and interpretable machine learning tool. While tree-based methods are designed only for cross-sectional data, we propose an approach that combines decision trees with time series modeling and thereby bridges the gap between machine learning and statistics. In particular, we combine decision trees with hidden Markov models where, for any time point, an underlying (hidden) Markov chain selects the tree that generates the corresponding observation. We propose an estimation approach that is based on the expectation-maximisation algorithm and assess its feasibility in simulation experiments. In our real-data application, we use eight seasons of National Football League (NFL) data to predict play calls conditional on covariates, such as the current quarter and the score, where the model’s states can be linked to the teams’ strategies. R code that implements the proposed method is available on GitHub.
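A minimal sketch of this model class (not the authors' R implementation): each hidden state owns a depth-1 "tree" (a stump) that maps a covariate to an emission probability, and the scaled forward algorithm of the hidden Markov model evaluates the likelihood of an observation sequence. All numerical values below are hypothetical:

```python
import math

def stump(threshold, p_low, p_high):
    """A depth-1 decision tree ('stump'): the emission probability
    P(y = 1 | x) depends only on whether x exceeds a threshold."""
    return lambda x: p_high if x > threshold else p_low

def emit(tree, x, y):
    # Probability that the given tree emits the binary observation y.
    p = tree(x)
    return p if y == 1 else 1.0 - p

def forward_loglik(y, x, trees, trans, init):
    """Scaled forward algorithm: a hidden Markov chain selects, at each
    time point, which tree generates the observation y[t] given x[t]."""
    n_states = len(trees)
    alpha = [init[s] * emit(trees[s], x[0], y[0]) for s in range(n_states)]
    loglik = 0.0
    for t in range(1, len(y)):
        c = sum(alpha)
        loglik += math.log(c)
        alpha = [a / c for a in alpha]  # rescale to avoid underflow
        alpha = [sum(alpha[r] * trans[r][s] for r in range(n_states))
                 * emit(trees[s], x[t], y[t]) for s in range(n_states)]
    return loglik + math.log(sum(alpha))

# Two regimes, e.g. a 'conservative' and an 'aggressive' play-calling state.
trees = [stump(0.0, 0.2, 0.3), stump(0.0, 0.7, 0.9)]
trans = [[0.9, 0.1], [0.2, 0.8]]  # sticky regime switching
init = [0.5, 0.5]
ll = forward_loglik([1, 0, 1], [-1.0, 0.5, 2.0], trees, trans, init)
```

The EM estimation step in the paper alternates between these forward(-backward) probabilities and refitting the trees; the sketch only shows the likelihood evaluation.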
Title: Markov-switching decision trees
AStA Advances in Statistical Analysis, 108(2), 461-476. Open access PDF: https://link.springer.com/content/pdf/10.1007/s10182-024-00501-6.pdf
Pub Date: 2024-05-15. DOI: 10.1007/s10182-024-00500-7
Roberto Colombi, Sabrina Giordano
When asked to assess their opinion about attitudes or perceptions on a Likert scale, respondents often endorse the midpoint or extremes of the scale, or agree or disagree regardless of the content. These responding behaviors are known in the psychometric literature as middle, extreme, acquiescence and disacquiescence response styles, which generally introduce bias in the results. One of the key motivations behind our approach is to account for these attitudes and how they evolve over time. The novelty of our proposal, in the context of longitudinal ordered categorical data, is in considering simultaneously the temporal dynamics of the responses (observable ordinal variables) and unobservable answering behaviors, possibly influenced by response styles, through a Markov switching logit model with two latent components. One component accommodates serial dependence and the respondent’s unobserved heterogeneity, while the other determines the responding attitude (due to response styles or not). The dependence of the responses on covariates is modelled by a stereotype logit model with parameters varying according to the two latent components. The stereotype logit model is adopted because it is a flexible extension of the proportional odds logit model that retains the advantage of using a single parameter to describe a regressor effect. In the paper, a new interpretation of the parameters of the stereotype model is given by defining the allocation sets as intervals of values of the linear predictor that identify the most probable response. Unobserved heterogeneity, serial dependence and the tendency toward response styles are modelled through our approach on longitudinal data collected by the Bank of Italy.
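The stereotype logit ingredient and the allocation-set interpretation can be sketched directly. Under a hypothetical parameterization (the values below are illustrative, not estimates from the paper), P(Y = j | x) ∝ exp(α_j + φ_j η) with a single linear predictor η = β'x, and the allocation set of category j is the interval of η values on which j is the modal response:

```python
import math

def stereotype_probs(eta, alpha, phi):
    """Stereotype logit: P(Y = j | x) is proportional to
    exp(alpha_j + phi_j * eta), with eta = beta' x the linear predictor."""
    scores = [a + p * eta for a, p in zip(alpha, phi)]
    m = max(scores)                       # stabilized softmax
    w = [math.exp(s - m) for s in scores]
    z = sum(w)
    return [v / z for v in w]

def modal_category(eta, alpha, phi):
    probs = stereotype_probs(eta, alpha, phi)
    return max(range(len(probs)), key=lambda j: probs[j])

# Hypothetical parameters (first category fixed at 0 for identifiability).
alpha = [0.0, -0.5, -2.0]
phi = [0.0, 1.0, 2.0]
# Allocation sets here: category 0 is modal for eta < 0.5,
# category 1 for 0.5 < eta < 1.5, category 2 for eta > 1.5.
probs = stereotype_probs(1.0, alpha, phi)
```

The single slope per category (φ_j) is what lets one parameter summarize a regressor's effect, the advantage the abstract attributes to the stereotype extension of the proportional odds model.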
Title: Markov switching stereotype logit models for longitudinal ordinal data affected by unobserved heterogeneity in responding behavior
AStA Advances in Statistical Analysis, 109(1), 117-147. Open access PDF: https://link.springer.com/content/pdf/10.1007/s10182-024-00500-7.pdf
Pub Date: 2024-05-08. DOI: 10.1007/s10182-024-00502-5
Alexander Gerharz, Andreas Groll, Gunther Schauberger
In this article, a new kind of interpretable machine learning method is presented that uses quantile shifts to help understand how a classification model partitions the feature space into predicted classes, and in this way makes the underlying statistical or machine learning model more trustworthy. Basically, real data points (or specific points of interest) are used, and the changes in the prediction after slightly raising or lowering specific features are observed. By comparing the predictions before and after the shifts, under certain conditions the observed changes in the predictions can be interpreted as neighborhoods of the classes with regard to the shifted features. Chord diagrams are used to visualize the observed changes. For illustration, this quantile shift method (QSM) is applied to an artificial example with medical labels and a real data example.
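The quantile shift idea can be sketched with a toy stand-in classifier (nothing below is the authors' implementation): shift one feature of each observed point by a small step and record which class transitions occur; the transition counts are what a chord diagram would then visualize:

```python
def toy_classifier(x):
    """Stand-in for any fitted classifier: maps a feature vector
    (two features here) to a predicted class label."""
    return 1 if x[0] + 2.0 * x[1] > 1.0 else 0

def quantile_shift(points, clf, feature, delta):
    """Raise one feature of every point by delta (a small quantile step)
    and record each (class before, class after) transition. A transition
    suggests the two classes are neighbors along the shifted feature."""
    changes = []
    for x in points:
        before = clf(x)
        shifted = list(x)
        shifted[feature] += delta
        after = clf(shifted)
        if after != before:
            changes.append((before, after))
    return changes

points = [[0.4, 0.2], [0.9, 0.3], [0.0, 0.0]]
moves = quantile_shift(points, toy_classifier, feature=0, delta=0.3)
# Only the first point crosses the decision boundary: class 0 -> class 1.
```

In the method proper, delta would be derived from the feature's empirical quantiles rather than fixed, which is the assumption this sketch simplifies away.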
Title: Deducing neighborhoods of classes from a fitted model
AStA Advances in Statistical Analysis, 108(2), 395-425. Open access PDF: https://link.springer.com/content/pdf/10.1007/s10182-024-00502-5.pdf
Pub Date: 2024-04-13. DOI: 10.1007/s10182-024-00498-y
Francesca Di Iorio, Riccardo Lucchetti, Rosaria Simone
In this paper, we propose a portmanteau test for misspecification in combination-of-uniform-and-binomial (CUB) models for the analysis of ordered rating data. Specifically, the test we build belongs to the class of information matrix (IM) tests, which are based on the information matrix equality. Monte Carlo evidence indicates that the test has excellent finite-sample properties in terms of actual size and power against several alternatives. Unlike other tests of the IM family, finite-sample adjustments based on the bootstrap appear to be unnecessary. An empirical application is also provided to illustrate how the IM test can be used to supplement model validation and selection.
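The information matrix equality underlying all IM tests states that, under correct specification, E[score²] + E[Hessian of the log-likelihood] = 0. For a single Bernoulli observation this can be verified exactly (a textbook illustration of the equality, not the CUB-specific statistic of the paper):

```python
def bernoulli_score(y, p):
    # First derivative (in p) of the Bernoulli log-likelihood.
    return y / p - (1 - y) / (1 - p)

def bernoulli_hessian(y, p):
    # Second derivative of the Bernoulli log-likelihood.
    return -y / p**2 - (1 - y) / (1 - p)**2

def im_equality_gap(p):
    """Information matrix equality: under correct specification,
    E[score^2] + E[Hessian] = 0. The expectation is taken exactly
    over y in {0, 1} with P(y = 1) = p."""
    e_score2 = p * bernoulli_score(1, p) ** 2 + (1 - p) * bernoulli_score(0, p) ** 2
    e_hess = p * bernoulli_hessian(1, p) + (1 - p) * bernoulli_hessian(0, p)
    return e_score2 + e_hess

gap = im_equality_gap(0.3)  # zero (up to rounding) under correct specification
```

An IM test replaces the exact expectations with sample averages at the fitted parameters and rejects when the empirical gap is too far from zero.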
Title: Testing distributional assumptions in CUB models for the analysis of rating data
AStA Advances in Statistical Analysis, 108(3), 669-701. Open access PDF: https://link.springer.com/content/pdf/10.1007/s10182-024-00498-y.pdf
Pub Date: 2024-04-12. DOI: 10.1007/s10182-024-00499-x
Jan Beran, Jeremy Näscher, Fabian Pietsch, Stephan Walterspacher
A frequent problem in applied time series analysis is the identification of dominating periodic components. A particularly difficult task is to distinguish deterministic periodic signals from periodic long memory. In this paper, a family of test statistics based on Whittle’s Gaussian log-likelihood approximation is proposed. Asymptotic critical regions and bounds for the asymptotic power are derived. In cases where a deterministic periodic signal and periodic long memory share the same frequency, consistency and rates of type II error probabilities depend on the long-memory parameter. Simulations and an application to respiratory muscle training data illustrate the results.
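The periodogram on which Whittle's log-likelihood approximation is built can be sketched as follows (an illustration of the ingredient, not the proposed test statistic): a deterministic sinusoid at a Fourier frequency produces a sharp spectral peak at that frequency.

```python
import cmath
import math

def periodogram(x):
    """Periodogram I(k/n) = |DFT_k|^2 / n at the Fourier frequencies,
    computed with a plain O(n^2) DFT (fine for short series)."""
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) ** 2 / n
            for k in range(n // 2 + 1)]

# A pure sinusoid at Fourier frequency 8/64: the spectrum peaks at k = 8.
n = 64
x = [math.sin(2 * math.pi * 8 * t / n) for t in range(n)]
spec = periodogram(x)
peak = max(range(1, len(spec)), key=lambda k: spec[k])
```

The hard case the paper addresses is precisely when such a peak could instead come from periodic long memory, whose spectral density also diverges at the cyclical frequency.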
Title: Testing for periodicity at an unknown frequency under cyclic long memory, with applications to respiratory muscle training
AStA Advances in Statistical Analysis, 108(4), 705-731. Open access PDF: https://link.springer.com/content/pdf/10.1007/s10182-024-00499-x.pdf
Pub Date: 2024-04-03. DOI: 10.1007/s10182-024-00497-z
Oliver Dürr, Stefan Hörtling, Danil Dold, Ivonne Kovylov, Beate Sick
Black-box variational inference (BBVI) is a technique to approximate the posterior of Bayesian models by optimization. As with MCMC, the user only needs to specify the model; the inference procedure is then carried out automatically. In contrast to MCMC, BBVI scales to many observations, is faster for some applications, and can take advantage of highly optimized deep learning frameworks, since it can be formulated as a minimization task. In the case of complex posteriors, however, existing state-of-the-art BBVI approaches often yield unsatisfactory posterior approximations. This paper presents Bernstein flow variational inference (BF-VI), a robust and easy-to-use method flexible enough to approximate complex multivariate posteriors. BF-VI combines ideas from normalizing flows and Bernstein polynomial-based transformation models. In benchmark experiments, we compare BF-VI solutions with exact posteriors, MCMC solutions, and state-of-the-art BBVI methods, including normalizing-flow-based BBVI. We show for low-dimensional models that BF-VI accurately approximates the true posterior; in higher-dimensional models, BF-VI compares favorably against other BBVI methods. Further, using BF-VI, we develop a Bayesian model for the semi-structured melanoma challenge data, combining a CNN model part for image data with an interpretable model part for tabular data, and demonstrate, for the first time, the use of BBVI in semi-structured models.
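The Bernstein polynomial ingredient can be sketched in isolation (hypothetical coefficients; not the BF-VI implementation): a Bernstein polynomial with strictly increasing coefficients is a strictly increasing map on [0, 1], which is the monotonicity a transformation model, or a flow, needs to stay invertible:

```python
import math

def bernstein_transform(z, theta):
    """Bernstein polynomial of degree M = len(theta) - 1 on [0, 1]:
    f(z) = sum_k theta_k * C(M, k) * z^k * (1 - z)^(M - k).
    Strictly increasing coefficients theta give a strictly increasing f,
    which keeps the transformation invertible (as a flow requires)."""
    m = len(theta) - 1
    return sum(t * math.comb(m, k) * z ** k * (1 - z) ** (m - k)
               for k, t in enumerate(theta))

theta = [-2.0, -0.5, 0.5, 3.0]   # hypothetical increasing coefficients
grid = [i / 10 for i in range(11)]
vals = [bernstein_transform(z, theta) for z in grid]
# f(0) = theta[0] and f(1) = theta[-1]; values rise monotonically between.
```

In BF-VI the coefficients are learned (with the ordering enforced by construction), and the change-of-variables formula turns a simple base density into a flexible posterior approximation.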
Title: Bernstein flows for flexible posteriors in variational Bayes
AStA Advances in Statistical Analysis, 108(2), 375-394. Open access PDF: https://link.springer.com/content/pdf/10.1007/s10182-024-00497-z.pdf