Non‐causal and non‐invertible ARMA models: Identification, estimation and application in equity portfolios
Alain Hecq, Daniel Velasquez‐Gaviria
Journal of Time Series Analysis, doi:10.1111/jtsa.12776, published 2024-09-18

Mixed causal‐non‐causal, invertible‐non‐invertible autoregressive moving‐average (MARMA) models have the advantage of admitting roots inside the unit circle, thereby capturing the dynamics of financial returns that depend on future expectations. This article introduces new techniques for estimating, identifying and simulating MARMA models. While the parameters are estimated from second‐order moments, identification relies on the existence of higher‐order dynamics, captured in the higher‐order spectral densities and the correlation of the squared residuals. A comprehensive Monte Carlo study demonstrates the robust performance of our estimation and identification methods. We present an empirical application to 24 emerging‐market portfolios sorted on the size, book‐to‐market, profitability, investment and momentum factors. All portfolios exhibit forward‐looking behavior, showing significant non‐causal and non‐invertible dynamics. Moreover, we find the residuals of the fitted models to be uncorrelated and independent, with no trace of conditional volatility.

Mixing properties of non‐stationary multi‐variate count processes
Zinsou Max Debaly, Michael H. Neumann, Lionel Truquet
Journal of Time Series Analysis, doi:10.1111/jtsa.12775, published 2024-09-17
We consider multi‐variate versions of two popular classes of integer‐valued processes. While the transition mechanism is time‐homogeneous, a possible non‐stationarity is introduced by an exogenous covariate process. We prove absolute regularity (β‐mixing) for the count process, with exponentially decaying mixing coefficients. The proof of this result makes use of a contraction property of the transition mechanism, which allows a coupling of two versions of the count process such that they eventually coalesce. We show how this result can be used to prove asymptotic normality of a least squares estimator of an involved model parameter.
{"title":"Mixing properties of non‐stationary multi‐variate count processes","authors":"Zinsou Max Debaly, Michael H. Neumann, Lionel Truquet","doi":"10.1111/jtsa.12775","DOIUrl":"https://doi.org/10.1111/jtsa.12775","url":null,"abstract":"We consider multi‐variate versions of two popular classes of integer‐valued processes. While the transition mechanism is time‐homogeneous, a possible non‐stationarity is introduced by an exogeneous covariate process. We prove absolute regularity (‐mixing) for the count process with exponentially decaying mixing coefficients. The proof of this result makes use of some sort of contraction in the transition mechanism which allows a coupling of two versions of the count process such that they eventually coalesce. We show how this result can be used to prove asymptotic normality of a least squares estimator of an involved model parameter.","PeriodicalId":49973,"journal":{"name":"Journal of Time Series Analysis","volume":"3 1","pages":""},"PeriodicalIF":0.9,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142249582","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Mean‐preserving rounding integer‐valued ARMA models
Christian H. Weiß, Fukang Zhu
Journal of Time Series Analysis, doi:10.1111/jtsa.12774, published 2024-09-11

In the past four decades, research on count time series has made significant progress, but research on ℤ‐valued time series is relatively rare. Existing ℤ‐valued models are mainly of autoregressive structure, where the use of the rounding operator is very natural. Because of the discontinuity of the rounding operator, the formulation of the corresponding model identifiability conditions and the computation of parameter estimators need special attention. It is also difficult to derive closed‐form formulae for crucial stochastic properties. We rediscover a stochastic rounding operator, referred to as mean‐preserving rounding, which overcomes the above drawbacks. We then propose a novel class of ℤ‐valued ARMA models based on the new operator and establish the existence of stationary solutions. Stochastic properties, including closed‐form formulae for (conditional) moments, the autocorrelation function, and conditional distributions, are obtained. The advantages of our novel model class over existing ones are demonstrated. In particular, our model construction avoids identifiability issues, so that maximum likelihood estimation is possible. A simulation study is provided, and the appealing performance of the new models is demonstrated on several real‐world data sets.
{"title":"Mean‐preserving rounding integer‐valued ARMA models","authors":"Christian H. Weiß, Fukang Zhu","doi":"10.1111/jtsa.12774","DOIUrl":"https://doi.org/10.1111/jtsa.12774","url":null,"abstract":"In the past four decades, research on count time series has made significant progress, but research on ‐valued time series is relatively rare. Existing ‐valued models are mainly of autoregressive structure, where the use of the rounding operator is very natural. Because of the discontinuity of the rounding operator, the formulation of the corresponding model identifiability conditions and the computation of parameter estimators need special attention. It is also difficult to derive closed‐form formulae for crucial stochastic properties. We rediscover a stochastic rounding operator, referred to as mean‐preserving rounding, which overcomes the above drawbacks. Then, a novel class of ‐valued ARMA models based on the new operator is proposed, and the existence of stationary solutions of the models is established. Stochastic properties including closed‐form formulae for (conditional) moments, autocorrelation function, and conditional distributions are obtained. The advantages of our novel model class compared to existing ones are demonstrated. In particular, our model construction avoids identifiability issues such that maximum likelihood estimation is possible. A simulation study is provided, and the appealing performance of the new models is shown by several real‐world data sets.","PeriodicalId":49973,"journal":{"name":"Journal of Time Series Analysis","volume":"91 1","pages":""},"PeriodicalIF":0.9,"publicationDate":"2024-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142206452","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Forecasting the yield curve: the role of additional and time‐varying decay parameters, conditional heteroscedasticity, and macro‐economic factors
João F. Caldeira, Werley C. Cordeiro, Esther Ruiz, André A.P. Santos
Journal of Time Series Analysis, doi:10.1111/jtsa.12769, published 2024-09-09
In this article, we analyse the forecasting performance of several parametric extensions of the popular Dynamic Nelson–Siegel (DNS) model for the yield curve. Our focus is on the role of additional and time‐varying decay parameters, conditional heteroscedasticity, and macroeconomic variables. We also consider the role of several popular restrictions on the dynamics of the factors. Using a novel dataset of end‐of‐month continuously compounded Treasury yields on US zero‐coupon bonds and frequentist estimation based on the extended Kalman filter, we show that a second decay parameter does not contribute to better forecasts. In concordance with the preferred habitat theory, we also show that the best forecasting model depends on the maturity. For short maturities, the best performance is obtained in a heteroscedastic model with a time‐varying decay parameter. However, for long maturities, neither the time‐varying decay nor the heteroscedasticity plays any role, and the best forecasts are obtained in the basic DNS model with the shape of the yield curve depending on macroeconomic activity. Finally, we find that assuming non‐stationary factors is helpful in forecasting at long horizons.
{"title":"Forecasting the yield curve: the role of additional and time‐varying decay parameters, conditional heteroscedasticity, and macro‐economic factors","authors":"João F. Caldeira, Werley C. Cordeiro, Esther Ruiz, André A.P. Santos","doi":"10.1111/jtsa.12769","DOIUrl":"https://doi.org/10.1111/jtsa.12769","url":null,"abstract":"In this article, we analyse the forecasting performance of several parametric extensions of the popular Dynamic Nelson–Siegel (DNS) model for the yield curve. Our focus is on the role of additional and time‐varying decay parameters, conditional heteroscedasticity, and macroeconomic variables. We also consider the role of several popular restrictions on the dynamics of the factors. Using a novel dataset of end‐of‐month continuously compounded Treasury yields on US zero‐coupon bonds and frequentist estimation based on the extended Kalman filter, we show that a second decay parameter does not contribute to better forecasts. In concordance with the preferred habitat theory, we also show that the best forecasting model depends on the maturity. For short maturities, the best performance is obtained in a heteroscedastic model with a time‐varying decay parameter. However, for long maturities, neither the time‐varying decay nor the heteroscedasticity plays any role, and the best forecasts are obtained in the basic DNS model with the shape of the yield curve depending on macroeconomic activity. Finally, we find that assuming non‐stationary factors is helpful in forecasting at long horizons.","PeriodicalId":49973,"journal":{"name":"Journal of Time Series Analysis","volume":"4 1","pages":""},"PeriodicalIF":0.9,"publicationDate":"2024-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142206451","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Weighted discrete ARMA models for categorical time series
Christian H. Weiß, Osama Swidan
Journal of Time Series Analysis, doi:10.1111/jtsa.12773, published 2024-09-06

A new and flexible class of ARMA‐like (autoregressive moving average) models for nominal or ordinal time series is proposed. These models are characterized by the use of so‐called weighting operators and are thus referred to as weighted discrete ARMA (WDARMA) models. By choosing an appropriate type of weighting operator, one can model, for example, nominal time series with negative serial dependencies, or ordinal time series where transitions to neighboring states are more likely than sudden large jumps. Essential stochastic properties of WDARMA models are derived, such as the existence of a stationary, ergodic, and φ‐mixing solution, as well as closed‐form formulae for marginal and bivariate probabilities. Numerical illustrations and simulation experiments regarding the finite‐sample performance of maximum likelihood estimation are presented. The possible benefits of using an appropriate weighting scheme within the WDARMA class are demonstrated by a real‐world data application.
{"title":"Weighted discrete ARMA models for categorical time series","authors":"Christian H. Weiß, Osama Swidan","doi":"10.1111/jtsa.12773","DOIUrl":"https://doi.org/10.1111/jtsa.12773","url":null,"abstract":"A new and flexible class of ARMA‐like (autoregressive moving average) models for nominal or ordinal time series is proposed, which are characterized by using so‐called weighting operators and are, thus, referred to as weighted discrete ARMA (WDARMA) models. By choosing an appropriate type of weighting operator, one can model, for example, nominal time series with negative serial dependencies, or ordinal time series where transitions to neighboring states are more likely than sudden large jumps. Essential stochastic properties of WDARMA models are derived, such as the existence of a stationary, ergodic, and ‐mixing solution as well as closed‐form formulae for marginal and bivariate probabilities. Numerical illustrations as well as simulation experiments regarding the finite‐sample performance of maximum likelihood estimation are presented. The possible benefits of using an appropriate weighting scheme within the WDARMA class are demonstrated by a real‐world data application.","PeriodicalId":49973,"journal":{"name":"Journal of Time Series Analysis","volume":"68 1","pages":""},"PeriodicalIF":0.9,"publicationDate":"2024-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142206453","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Improved estimation of dynamic models of conditional means and variances
Weining Wang, Jeffrey M. Wooldridge, Mengshan Xu
Journal of Time Series Analysis, doi:10.1111/jtsa.12770, published 2024-08-30

Using ‘working’ assumptions on the conditional third and fourth moments of the errors, we propose a method of moments estimator that can have improved efficiency over the popular Gaussian quasi‐maximum likelihood estimator (GQMLE). Higher‐order moment assumptions are not needed for consistency – we only require the first two conditional moments to be correctly specified – but the optimal instruments are derived under these assumptions. The working assumptions allow both asymmetry in the distribution of the standardized errors and fourth moments that can be smaller or larger than that of the Gaussian distribution. The approach is related to the generalized estimating equations (GEE) approach, which seeks to improve estimators of the conditional mean parameters by making working assumptions on the conditional second moments. We derive the asymptotic distribution of the new estimator and show that it does not depend on the estimators of the third and fourth moments. A simulation study shows that the efficiency gains over the GQMLE can be non‐trivial.
{"title":"Improved estimation of dynamic models of conditional means and variances","authors":"Weining Wang, Jeffrey M. Wooldridge, Mengshan Xu","doi":"10.1111/jtsa.12770","DOIUrl":"https://doi.org/10.1111/jtsa.12770","url":null,"abstract":"Using ‘working’ assumptions on conditional third and fourth moments of errors, we propose a method of moments estimator that can have improved efficiency over the popular Gaussian quasi‐maximum likelihood estimator (GQMLE). Higher‐order moment assumptions are not needed for consistency – we only require the first two conditional moments to be correctly specified – but the optimal instruments are derived under these assumptions. The working assumptions allow both asymmetry in the distribution of the standardized errors as well as fourth moments that can be smaller or larger than that of the Gaussian distribution. The approach is related to the generalized estimation equations (GEE) approach – which seeks the improvement of estimators of the conditional mean parameters by making working assumptions on the conditional second moments. We derive the asymptotic distribution of the new estimator and show that it does not depend on the estimators of the third and fourth moments. A simulation study shows that the efficiency gains over the GQMLE can be non‐trivial.","PeriodicalId":49973,"journal":{"name":"Journal of Time Series Analysis","volume":"50 1","pages":""},"PeriodicalIF":0.9,"publicationDate":"2024-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142206454","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Estimating lagged (cross‐)covariance operators of Lp‐m‐approximable processes in Cartesian product Hilbert spaces
Sebastian Kühnert
Journal of Time Series Analysis, doi:10.1111/jtsa.12772, published 2024-08-30

Estimating parameters of functional ARMA, GARCH and invertible processes requires estimating lagged covariance and cross‐covariance operators of Cartesian product Hilbert space‐valued processes. Asymptotic results have been derived in recent years, either less generally or under a strict condition. This article derives upper bounds on the estimation errors for such operators under the mild condition of Lp‐m‐approximability, for each lag, Cartesian power(s) and sample size, where the two processes may take values in different spaces in the context of lagged cross‐covariance operators. Implications of our results for eigenelements and parameters in functional AR(MA) models are also discussed.
{"title":"Estimating lagged (cross‐)covariance operators of Lp‐m‐approximable processes in cartesian product hilbert spaces","authors":"Sebastian Kühnert","doi":"10.1111/jtsa.12772","DOIUrl":"https://doi.org/10.1111/jtsa.12772","url":null,"abstract":"Estimating parameters of functional ARMA, GARCH and invertible processes requires estimating lagged covariance and cross‐covariance operators of Cartesian product Hilbert space‐valued processes. Asymptotic results have been derived in recent years, either less generally or under a strict condition. This article derives upper bounds of the estimation errors for such operators based on the mild condition ‐‐approximability for each lag, Cartesian power(s) and sample size, where the two processes can take values in different spaces in the context of lagged cross‐covariance operators. Implications of our results on eigen elements and parameters in functional AR(MA) models are also discussed.","PeriodicalId":49973,"journal":{"name":"Journal of Time Series Analysis","volume":"1 1","pages":""},"PeriodicalIF":0.9,"publicationDate":"2024-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142206455","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Self‐normalization inference for linear trends in cointegrating regressions
Cheol‐Keun Cho
Journal of Time Series Analysis, doi:10.1111/jtsa.12771, published 2024-08-29

In this article, statistical tests concerning the trend coefficient in cointegrating regressions are developed for the case where the stochastic regressors have deterministic linear trends. The self‐normalization (SN) approach is adopted to develop inferential methods within the integrated and modified ordinary least squares (IMOLS) estimation framework. Two different self‐normalizers are used to construct the SN test statistics: a functional of the recursive IMOLS estimators and a functional of the IMOLS residuals. Neither of the resulting tests requires studentization with a heteroskedasticity and autocorrelation consistent (HAC) estimator. One of the tests requires the choice of a trimming parameter for its implementation, whereas the other needs no tuning parameter at all. In the simulations, the tuning‐free test exhibits the smallest size distortion among the inferential methods examined in this article, although this may come with some loss of power, particularly in small samples.
{"title":"Self‐normalization inference for linear trends in cointegrating regressions","authors":"Cheol‐Keun Cho","doi":"10.1111/jtsa.12771","DOIUrl":"https://doi.org/10.1111/jtsa.12771","url":null,"abstract":"In this article, statistical tests concerning the trend coefficient in cointegrating regressions are addressed for the case when the stochastic regressors have deterministic linear trends. The self‐normalization (SN) approach is adopted for developing inferential methods in the integrated and modified ordinary least squares (IMOLS) estimation framework. Two different self‐normalizers are used to construct the SN test statistics: a functional of the recursive IMOLS estimators and a functional of the IMOLS residuals. These two self‐normalizers produce two SN tests, denoted by and respectively. Neither test requires studentization with a heteroskedasticity and autocorrelation consistent (HAC) estimator. A trimming parameter must be chosen to implement the test, whereas the test does not require any tuning parameter. In the simulation, the test exhibits the smallest size distortion among the inferential methods examined in this article. However, this may come with some loss of power, particularly in small samples.","PeriodicalId":49973,"journal":{"name":"Journal of Time Series Analysis","volume":"10 1","pages":""},"PeriodicalIF":0.9,"publicationDate":"2024-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142206456","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

The Granger–Johansen representation theorem for integrated time series on Banach space
Phil Howlett, Brendan K. Beare, Massimo Franchi, John Boland, Konstantin Avrachenkov
Journal of Time Series Analysis, doi:10.1111/jtsa.12766, published 2024-08-18
We prove an extended Granger–Johansen representation theorem (GJRT) for finite‐ or infinite‐order integrated autoregressive time series on Banach space. We assume only that the resolvent of the autoregressive polynomial for the series is analytic on and inside the unit circle, except for an isolated singularity at unity. If the singularity is a pole of finite order, the time series is integrated of the same order. If it is an essential singularity, the time series is integrated of order infinity. When there is no deterministic forcing, the value of the series at each time is the sum of an almost surely convergent stochastic trend, a deterministic term depending on the initial conditions, and a finite sum of embedded white noise terms in the prior observations. This is the extended GJRT. In each case the original series is the sum of two separate autoregressive time series on complementary subspaces: a singular component, which is integrated of the same order as the original series, and a regular component, which is not integrated. The extended GJRT applies to all integrated autoregressive processes, irrespective of the spatial dimension, the number of stochastic trends and cointegrating relations in the system, and the order of integration.
{"title":"The Granger–Johansen representation theorem for integrated time series on Banach space","authors":"Phil Howlett, Brendan K. Beare, Massimo Franchi, John Boland, Konstantin Avrachenkov","doi":"10.1111/jtsa.12766","DOIUrl":"https://doi.org/10.1111/jtsa.12766","url":null,"abstract":"We prove an extended Granger–Johansen representation theorem (GJRT) for finite‐ or infinite‐order integrated autoregressive time series on Banach space. We assume only that the resolvent of the autoregressive polynomial for the series is analytic on and inside the unit circle except for an isolated singularity at unity. If the singularity is a pole of finite order the time series is integrated of the same order. If the singularity is an essential singularity the time series is integrated of order infinity. When there is no deterministic forcing the value of the series at each time is the sum of an almost surely convergent stochastic trend, a deterministic term depending on the initial conditions and a finite sum of embedded white noise terms in the prior observations. This is the extended GJRT. In each case the original series is the sum of two separate autoregressive time series on complementary subspaces – a singular component which is integrated of the same order as the original series and a regular component which is not integrated. The extended GJRT applies to all integrated autoregressive processes irrespective of the spatial dimension, the number of stochastic trends and cointegrating relations in the system and the order of integration.","PeriodicalId":49973,"journal":{"name":"Journal of Time Series Analysis","volume":"9 1","pages":""},"PeriodicalIF":0.9,"publicationDate":"2024-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142206457","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Testing covariance separability for continuous functional data
Holger Dette, Gauthier Dierickx, Tim Kutta
Journal of Time Series Analysis, doi:10.1111/jtsa.12764, published 2024-08-12

Analyzing the covariance structure of data is a fundamental task of statistics. While this task is simple for low‐dimensional observations, it becomes challenging for more intricate objects, such as multi‐variate functions. Here, the covariance can be so complex that just saving a non‐parametric estimate is impractical and structural assumptions are necessary to tame the model. One popular assumption for space‐time data is separability of the covariance into purely spatial and temporal factors. In this article, we present a new test for separability in the context of dependent functional time series. While most of the related work studies functional data in a Hilbert space of square integrable functions, we model the observations as objects in the space of continuous functions equipped with the supremum norm. We argue that this (mathematically challenging) setup enhances interpretability for users and is more in line with practical preprocessing. Our test statistic measures the maximal deviation between the estimated covariance kernel and a separable approximation. Critical values are obtained by a non‐standard multiplier bootstrap for dependent data. We prove the statistical validity of our approach and demonstrate its practicability in a simulation study and a data example.
{"title":"Testing covariance separability for continuous functional data","authors":"Holger Dette, Gauthier Dierickx, Tim Kutta","doi":"10.1111/jtsa.12764","DOIUrl":"https://doi.org/10.1111/jtsa.12764","url":null,"abstract":"Analyzing the covariance structure of data is a fundamental task of statistics. While this task is simple for low‐dimensional observations, it becomes challenging for more intricate objects, such as multi‐variate functions. Here, the covariance can be so complex that just saving a non‐parametric estimate is impractical and structural assumptions are necessary to tame the model. One popular assumption for space‐time data is separability of the covariance into purely spatial and temporal factors. In this article, we present a new test for separability in the context of dependent functional time series. While most of the related work studies functional data in a Hilbert space of square integrable functions, we model the observations as objects in the space of continuous functions equipped with the supremum norm. We argue that this (mathematically challenging) setup enhances interpretability for users and is more in line with practical preprocessing. Our test statistic measures the maximal deviation between the estimated covariance kernel and a separable approximation. Critical values are obtained by a non‐standard multiplier bootstrap for dependent data. We prove the statistical validity of our approach and demonstrate its practicability in a simulation study and a data example.","PeriodicalId":49973,"journal":{"name":"Journal of Time Series Analysis","volume":"8 1","pages":""},"PeriodicalIF":0.9,"publicationDate":"2024-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141941411","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}