Pub Date: 2024-01-20 | DOI: 10.1016/j.jmva.2024.105296
Haoxiang Li, Qian Qin, Galin L. Jones
Gaussian mixtures are commonly used for modeling heavy-tailed error distributions in robust linear regression. Combining the likelihood of a multivariate robust linear regression model with a standard improper prior distribution yields an analytically intractable posterior distribution that can be sampled using a data augmentation algorithm. When the response matrix has missing entries, there are unique challenges to the application and analysis of the convergence properties of the algorithm. Conditions for geometric ergodicity are provided when the incomplete data have a “monotone” structure. In the absence of a monotone structure, an intermediate imputation step is necessary for implementing the algorithm. In this case, we provide sufficient conditions for the algorithm to be Harris ergodic. Finally, we show that, when there is a monotone structure and intermediate imputation is unnecessary, intermediate imputation slows the convergence of the underlying Markov chain, while post hoc imputation does not. An R package for the data augmentation algorithm is provided.
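To make the Gaussian-mixture data augmentation idea concrete, here is a minimal sketch for the univariate, complete-data special case with Student-t errors and a flat prior; the paper's algorithm handles the multivariate model with missing responses, and the fixed error scale below is our simplifying assumption, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated complete data; the paper treats the multivariate model with
# missing responses, this sketch is the univariate special case.
n, p, nu = 200, 3, 4.0              # nu: Student-t degrees of freedom
X = rng.normal(size=(n, p))
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true + rng.standard_t(nu, size=n)

def da_sampler(y, X, nu, n_iter=2000):
    """Data augmentation (Gibbs) sampler for regression with t errors,
    written as a Gaussian scale mixture; flat prior on beta, error scale
    fixed at 1 for simplicity."""
    n, p = X.shape
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    draws = np.empty((n_iter, p))
    for it in range(n_iter):
        # Step 1: latent mixing weights w_i | beta ~ Gamma((nu+1)/2, rate=(nu+r_i^2)/2)
        resid = y - X @ beta
        w = rng.gamma((nu + 1) / 2, 2 / (nu + resid**2))
        # Step 2: beta | w ~ Normal centered at the weighted least squares fit
        XtW = X.T * w
        cov = np.linalg.inv(XtW @ X)
        beta = rng.multivariate_normal(cov @ (XtW @ y), cov)
        draws[it] = beta
    return draws

draws = da_sampler(y, X, nu)
post_mean = draws[500:].mean(axis=0)    # discard burn-in
```

Each sweep alternates the two conditional draws above; the missing-data variants analyzed in the paper insert (or avoid) an additional imputation step between them.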
Convergence analysis of data augmentation algorithms for Bayesian robust multivariate linear regression with incomplete data. Journal of Multivariate Analysis, vol. 202, Article 105296.
Pub Date: 2024-01-13 | DOI: 10.1016/j.jmva.2024.105295
Helmut Finner , Markus Roters
We show that positively associated squared (and absolute-valued) multivariate normally distributed random vectors need not be multivariate totally positive of order 2 (MTP2) for p ≥ 3. This result disproves Theorem 1 in Eisenbaum (2014, Ann. Probab.) and the conjecture that positive association of squared multivariate normals is equivalent to MTP2 and infinite divisibility of squared multivariate normals. Among other results, we show that there exist absolute-valued multivariate normals which are conditionally increasing in sequence (CIS) (or weakly CIS (WCIS)) and hence positively associated but not MTP2. Moreover, we show that there exist absolute-valued multivariate normals which are positively associated but not CIS. As a by-product, we obtain necessary conditions for CIS and WCIS of absolute normals and illustrate these conditions in some examples. With respect to implications and applications of our results, we show positive association (PA) beyond MTP2 for some related multivariate distributions (chi-square, t, skew normal) and refer to possible conservative multiple test procedures and conservative simultaneous confidence bounds. Finally, we obtain the validity of the strong form of Gaussian product inequalities beyond MTP2.
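The MTP2 property at stake can be probed numerically. The sketch below (our illustration, not the authors' construction) evaluates the density of |Z| for Z ~ N(0, Σ) by summing the normal density over sign flips and searches random point pairs for a violation of the defining inequality f(x ∨ y) f(x ∧ y) ≥ f(x) f(y); `Sigma_bad` is a hypothetical candidate precision structure, not the paper's counterexample.

```python
import numpy as np

def abs_normal_density(x, Sigma):
    """Density of |Z| for Z ~ N(0, Sigma) at a point x with nonnegative
    entries, obtained by summing the normal density over all sign flips."""
    p = len(x)
    Si = np.linalg.inv(Sigma)
    const = (2 * np.pi) ** (p / 2) * np.sqrt(np.linalg.det(Sigma))
    total = 0.0
    for mask in range(2 ** p):
        s = np.array([1.0 if (mask >> k) & 1 else -1.0 for k in range(p)])
        z = s * x
        total += np.exp(-0.5 * z @ Si @ z) / const
    return total

def mtp2_violated(Sigma, n_trials=2000, seed=0):
    """Randomly search for a violation of the MTP2 inequality
    f(max(x,y)) * f(min(x,y)) >= f(x) * f(y) for the absolute-valued normal."""
    rng = np.random.default_rng(seed)
    p = Sigma.shape[0]
    for _ in range(n_trials):
        x, y = rng.uniform(0.0, 2.0, size=(2, p))
        lhs = (abs_normal_density(np.maximum(x, y), Sigma)
               * abs_normal_density(np.minimum(x, y), Sigma))
        rhs = abs_normal_density(x, Sigma) * abs_normal_density(y, Sigma)
        if lhs < rhs - 1e-12:
            return True
    return False

# With Sigma = I the components of |Z| are independent, so the MTP2
# inequality holds with equality everywhere and no violation is found.
identity_ok = not mtp2_violated(np.eye(3))

# A hypothetical candidate whose inverse has an off-diagonal sign pattern
# that sign flips cannot fix (illustrative only).
Sigma_bad = np.linalg.inv(np.array([[2.0, 1.0, 1.0],
                                    [1.0, 2.0, 1.0],
                                    [1.0, 1.0, 2.0]]))
found = mtp2_violated(Sigma_bad)
```

The random search is of course only a one-sided check: it can exhibit a violation but never certify MTP2.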
On positive association of absolute-valued and squared multivariate Gaussians beyond MTP2. Journal of Multivariate Analysis, vol. 202, Article 105295 (open access).
Pub Date: 2024-01-05 | DOI: 10.1016/j.jmva.2024.105294
Xin Wang , Lingchen Kong , Liqun Wang
Estimation of a high-dimensional sparse covariance matrix is a fundamental problem in multivariate analysis with a wide range of applications. This paper presents a novel method for sparse covariance matrix estimation by solving a non-convex regularization optimization problem. We establish the asymptotic properties of the proposed estimator and develop a multi-stage convex relaxation method to compute an effective estimator. The multi-stage convex relaxation method guarantees that any accumulation point of the generated sequence is a first-order stationary point of the non-convex optimization problem. Moreover, error bounds for the first two stage estimators are derived under some regularity conditions. Numerical results show that our estimator outperforms state-of-the-art estimators while achieving a high degree of sparsity.
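A minimal sketch of the multi-stage convex relaxation idea, under our own assumptions (entrywise penalty, SCAD derivative as the stand-in non-convex regularizer; the paper's penalty and stages may differ): each stage is a weighted-l1 problem whose closed-form solution is entrywise soft-thresholding, with weights recomputed from the previous iterate.

```python
import numpy as np

rng = np.random.default_rng(1)

def scad_deriv(t, lam, a=3.7):
    """Derivative of the SCAD penalty, used here as a stand-in
    non-convex regularizer (the paper's penalty may differ)."""
    t = np.abs(t)
    return np.where(t <= lam, lam, np.maximum(a * lam - t, 0.0) / (a - 1.0))

def multistage_cov(S, lam, stages=2):
    """Multi-stage convex relaxation: each stage solves a weighted-l1
    problem (entrywise soft-thresholding of the sample covariance) with
    weights from the previous iterate; stage 1 is the plain l1 relaxation."""
    p = S.shape[0]
    off = ~np.eye(p, dtype=bool)
    w = np.full_like(S, lam)              # stage-1 weights: uniform
    Sigma = S.copy()
    for _ in range(stages):
        T = np.sign(S) * np.maximum(np.abs(S) - w, 0.0)
        Sigma = np.where(off, T, S)       # leave the diagonal unpenalized
        w = scad_deriv(Sigma, lam)        # reweight for the next stage
    return Sigma

# Sparse truth: identity plus one off-diagonal entry
p, n = 10, 400
Sigma_true = np.eye(p)
Sigma_true[0, 1] = Sigma_true[1, 0] = 0.5
Xs = rng.multivariate_normal(np.zeros(p), Sigma_true, size=n)
S = np.cov(Xs, rowvar=False)
est = multistage_cov(S, lam=0.15, stages=2)
```

The second stage assigns near-zero weight to large surviving entries, which is what reduces the soft-thresholding bias relative to the one-stage l1 solution.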
Estimation of sparse covariance matrix via non-convex regularization. Journal of Multivariate Analysis, vol. 202, Article 105294.
Pub Date: 2023-12-28 | DOI: 10.1016/j.jmva.2023.105290
Shin-ichi Tsukada
In this study, we focus on the analysis of data sets with missing data. Statistical procedures for such data sets, particularly those with a general missingness pattern, are difficult to express in explicit formulae and often require computational algorithms. We specifically address monotone missing data, the simplest form of incomplete data. We conduct hypothesis tests for the equality of mean vectors and covariance matrices across different populations, and derive the properties of likelihood ratio test statistics in scenarios involving large samples and large dimensions.
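A toy illustration of why monotone patterns admit explicit formulae, on our own simulated bivariate Gaussian data (the paper treats tests across several populations in high dimensions): under a two-step monotone sample, the MLE of the second mean factorizes into a complete-block regression plus a correction from the extra observations of the first variable.

```python
import numpy as np

rng = np.random.default_rng(5)

# Two-step monotone pattern: all n1 + n2 units observe x1,
# but only the first n1 units also observe x2.
n1, n2 = 150, 150
mu_true = np.array([1.0, 2.0])
Sigma = np.array([[1.0, 0.5],
                  [0.5, 1.0]])
Z = rng.multivariate_normal(mu_true, Sigma, size=n1 + n2)
x1_all = Z[:, 0]          # observed for every unit
x1_c = Z[:n1, 0]          # complete block, first variable
x2_c = Z[:n1, 1]          # complete block, second variable

# MLE of E[x2] under the monotone sample (Anderson's factorization):
# regress x2 on x1 in the complete block, then correct the complete-block
# mean of x2 using the extra x1 observations.
C = np.cov(x1_c, x2_c)
b = C[0, 1] / C[0, 0]
mu2_hat = x2_c.mean() + b * (x1_all.mean() - x1_c.mean())
```

The correction term uses all n1 + n2 observations of x1, which is exactly the efficiency gain over the complete-case mean; with a general (non-monotone) pattern no such closed form exists.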
Hypothesis testing for mean vector and covariance matrix of multi-populations under a two-step monotone incomplete sample in large sample and dimension. Journal of Multivariate Analysis, vol. 202, Article 105290.
Pub Date: 2023-12-23 | DOI: 10.1016/j.jmva.2023.105292
Elena Di Bernardino , Thomas Laloë , Cambyse Pakzad
The present article is devoted to the semi-parametric estimation of multivariate expectiles at extreme levels. The multivariate risk measures considered also allow conditioning on a functional covariate belonging to an infinite-dimensional space. Using the first-order optimality condition, we interpret these expectiles as solutions of a multidimensional nonlinear optimization problem. Inference is then based on a gradient-descent-type minimization algorithm, coupled with consistent kernel estimators of the key statistical quantities, such as conditional quantiles, the conditional tail index, and conditional tail dependence functions. The method is valid for equivalently heavy-tailed marginals and under a multivariate regular variation condition on the underlying unknown random vector with arbitrary dependence structure. Our main result establishes the consistency in probability, with a convergence rate, of the approximate solution vectors of the optimization problem. This allows us to estimate the global computational cost of the whole procedure as a function of the sample size. The finite-sample performance of our methodology is illustrated numerically on simulated datasets.
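The optimization view of expectiles can be sketched in a few lines for the unconditional, one-dimensional case (our simplification; the paper's procedure is multivariate, functional, and targets extreme levels): the tau-expectile is the minimizer of an asymmetric squared loss, found here by plain gradient descent.

```python
import numpy as np

rng = np.random.default_rng(2)

def expectile_grad(mu, x, tau):
    """Gradient in mu of the asymmetric squared loss
    mean(|tau - 1{x <= mu}| * (x - mu)^2) defining the tau-expectile."""
    w = np.where(x > mu, tau, 1 - tau)
    return -2.0 * np.mean(w * (x - mu))

def expectile_gd(x, tau, lr=0.5, n_iter=500):
    """Estimate the tau-expectile by plain gradient descent."""
    mu = x.mean()                      # the 0.5-expectile is the mean
    for _ in range(n_iter):
        mu -= lr * expectile_grad(mu, x, tau)
    return mu

x = rng.normal(size=10_000)
e_half = expectile_gd(x, 0.5)
e_high = expectile_gd(x, 0.9)
```

Because the loss is convex with a gradient whose slope is bounded between 2·min(tau, 1−tau) and 2·max(tau, 1−tau), this fixed step size is a contraction and the iteration converges.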
Estimation of extreme multivariate expectiles with functional covariates. Journal of Multivariate Analysis, vol. 202, Article 105292.
Pub Date: 2023-12-22 | DOI: 10.1016/j.jmva.2023.105293
Terence Kevin Manfoumbi Djonguet, Guy Martial Nkiet
We propose an independence test for random variables taking values in metric spaces, based on a test statistic obtained by appropriately centering and rescaling the squared Hilbert–Schmidt norm of the usual empirical estimator of the normalized cross-covariance operator. We establish the asymptotic normality of this statistic under the independence hypothesis, leading to a new test for independence of functional random variables. A simulation study comparing the proposed test with existing ones is provided.
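A close, simpler relative of such statistics is the (un-normalized) Hilbert–Schmidt independence criterion. The sketch below uses it with a permutation calibration rather than the authors' normalized operator and asymptotic-normal calibration, so it only illustrates the squared-HS-norm idea.

```python
import numpy as np

rng = np.random.default_rng(3)

def gram(x, sigma=1.0):
    """Gaussian kernel Gram matrix for a 1-d sample."""
    d2 = (x[:, None] - x[None, :]) ** 2
    return np.exp(-d2 / (2.0 * sigma**2))

def hsic(x, y):
    """Biased empirical HSIC: squared Hilbert-Schmidt norm of the
    empirical (un-normalized) cross-covariance operator."""
    n = len(x)
    H = np.eye(n) - np.ones((n, n)) / n    # centering matrix
    return np.trace(gram(x) @ H @ gram(y) @ H) / n**2

def perm_pvalue(x, y, n_perm=200):
    """Permutation p-value for the independence test."""
    t0 = hsic(x, y)
    null = [hsic(x, rng.permutation(y)) for _ in range(n_perm)]
    return (1 + sum(t >= t0 for t in null)) / (1 + n_perm)

x = rng.normal(size=100)
y_dep = x + 0.3 * rng.normal(size=100)    # strongly dependent on x
y_ind = rng.normal(size=100)              # independent of x
p_dep = perm_pvalue(x, y_dep)
```

The statistic is zero (up to sampling noise) exactly under independence, which is what makes both permutation and asymptotic calibrations possible.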
An independence test for functional variables based on kernel normalized cross-covariance operator. Journal of Multivariate Analysis, vol. 202, Article 105293.
Pub Date: 2023-12-15 | DOI: 10.1016/j.jmva.2023.105282
Yujie Xue , Masanobu Taniguchi , Tong Liu
The least squares estimator (LSE) is a natural estimator for linear regression models. However, if the dimension of the vector of regression coefficients is greater than 1 and the residuals are dependent, the best linear unbiased estimator (BLUE), which incorporates the covariance matrix Γ of the residual process, outperforms the LSE in the sense of mean square error. Since unbiased estimators are generally inadmissible, Senda and Taniguchi (2006) introduced a James–Stein type shrinkage estimator for the regression coefficients based on the LSE, where the residual process is a Gaussian stationary process, and provided sufficient conditions under which the James–Stein type shrinkage estimator improves on the LSE. In this paper, we propose a shrinkage estimator based on the BLUE, and give sufficient conditions for this shrinkage estimator to improve on the BLUE. Furthermore, since Γ is infeasible, assuming that Γ has the form Γ = Γ(θ), we introduce a feasible version of that shrinkage estimator in which Γ(θ) is replaced by Γ(θ̂), as introduced in Toyooka (1986). Additionally, we give sufficient conditions under which the feasible version improves on the BLUE. The results of numerical studies confirm our approach.
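For concreteness, here is the BLUE (generalized least squares) under a known AR(1) residual covariance, together with a James–Stein-type shrinkage toward the origin. The shrinkage factor below is the classical form and is only illustrative of the idea; the paper derives its own estimator and improvement conditions.

```python
import numpy as np

rng = np.random.default_rng(4)

def ar1_cov(n, theta):
    """Covariance matrix of a stationary AR(1) process with unit variance."""
    i = np.arange(n)
    return theta ** np.abs(i[:, None] - i[None, :])

n, p, theta = 300, 4, 0.6
X = rng.normal(size=(n, p))
beta_true = np.array([2.0, -1.0, 0.5, 0.0])
Gamma = ar1_cov(n, theta)
y = X @ beta_true + rng.multivariate_normal(np.zeros(n), Gamma)

# BLUE = generalized least squares with the (here known) residual covariance
Gi = np.linalg.inv(Gamma)
A = np.linalg.inv(X.T @ Gi @ X)          # covariance matrix of the BLUE
blue = A @ (X.T @ Gi @ y)

# A James-Stein-type shrinkage of the BLUE toward the origin
q = blue @ np.linalg.inv(A) @ blue       # squared Mahalanobis length of BLUE
shrunk = (1.0 - (p - 2) / q) * blue
```

The feasible version discussed in the abstract would replace `Gamma` by an estimate Γ(θ̂) before forming `Gi`.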
Shrinkage estimators of BLUE for time series regression models. Journal of Multivariate Analysis, vol. 202, Article 105282.
We are concerned with nonparametric estimation of the expectile functional regression. More precisely, we build an estimator of the conditional expectile by the local linear smoothing approach, and then establish the asymptotic distribution of the constructed estimator. Establishing this result requires a Bahadur representation of the conditional expectile, which is obtained under standard conditions covering both the functional aspect of the data and the nonparametric character of the model. The practical impact of this result in nonparametric functional statistics is discussed and illustrated using artificial data.
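A minimal sketch of local linear expectile smoothing for a scalar covariate (our simplification; the paper's covariate is functional): the conditional tau-expectile at a point is fitted by iteratively reweighted least squares on the kernel-weighted asymmetric squared loss.

```python
import numpy as np

rng = np.random.default_rng(7)

def local_linear_expectile(x0, x, y, tau, h, n_iter=50):
    """Local linear conditional tau-expectile at x0, computed by
    iteratively reweighted least squares on the asymmetric squared loss."""
    K = np.exp(-0.5 * ((x - x0) / h) ** 2)       # Gaussian kernel weights
    X = np.column_stack([np.ones_like(x), x - x0])
    beta = np.zeros(2)
    for _ in range(n_iter):
        r = y - X @ beta
        w = K * np.where(r > 0, tau, 1 - tau)    # kernel times asymmetry weight
        WX = w[:, None] * X
        beta = np.linalg.solve(X.T @ WX, WX.T @ y)
    return beta[0]                               # intercept = estimate at x0

x = rng.uniform(-2.0, 2.0, size=2000)
y = np.sin(x) + 0.3 * rng.normal(size=2000)
e_mid = local_linear_expectile(0.0, x, y, tau=0.5, h=0.3)
e_high = local_linear_expectile(0.0, x, y, tau=0.9, h=0.3)
e_low = local_linear_expectile(0.0, x, y, tau=0.1, h=0.3)
```

At tau = 0.5 the weights are constant and the procedure reduces to ordinary local linear regression, recovering the conditional mean.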
Pub Date: 2023-12-05 | DOI: 10.1016/j.jmva.2023.105281
Ouahiba Litimein, Ali Laksaci, Larbi Ait-Hennani, Boubaker Mechab, Mustapha Rachdi
Asymptotic normality of the local linear estimator of the functional expectile regression. Journal of Multivariate Analysis, vol. 202, Article 105281.
Pub Date: 2023-12-02 | DOI: 10.1016/j.jmva.2023.105280
Christian Genest , Ostap Okhrin , Taras Bodnar
Preface to the Special Issue “Copula modeling from Abe Sklar to the present day”. Journal of Multivariate Analysis, vol. 201, Article 105280.
Pub Date: 2023-12-01 | DOI: 10.1016/j.jmva.2023.105271
Yici Chen, Tomonari Sei
Two-dimensional distributions whose marginal distributions are uniform are called bivariate copulas. Among them, the one that satisfies given constraints on expectations and is closest to the independent distribution in the sense of Kullback–Leibler divergence is called the minimum information bivariate copula. The density function of the minimum information copula contains a set of functions called the normalizing functions, which are often difficult to compute. Although a number of proper scoring rules have been proposed for probability distributions with normalizing constants, such as exponential families, these scores are not applicable to minimum information copulas because of the normalizing functions. In this paper, we propose the conditional Kullback–Leibler score, which avoids computation of the normalizing functions; the main idea of its construction is to use pairs of observations. We show that the proposed score is strictly proper on the space of copula density functions, so the estimator derived from it is asymptotically consistent. Furthermore, the score is convex in the parameters and can easily be optimized by gradient methods.
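The pairing trick can be sketched as follows, under our own assumed density form c(u, v) = a(u) b(v) exp(θuv): conditioning on the unordered pair of observations makes the normalizing functions a and b cancel, so the score is computable without them. This is an illustration of the idea, not the paper's exact score.

```python
import numpy as np

rng = np.random.default_rng(6)

def pairwise_score(theta, u, v):
    """Average conditional Kullback-Leibler-type score over observation
    pairs for c(u,v) = a(u) b(v) exp(theta*u*v): within each pair the
    normalizing functions a, b cancel and never need to be computed."""
    i, j = np.triu_indices(len(u), k=1)
    s = theta * (u[i] * v[i] + u[j] * v[j])   # observed pairing
    t = theta * (u[i] * v[j] + u[j] * v[i])   # swapped pairing
    return np.mean(np.logaddexp(s, t) - s)    # -log conditional probability

# Independent uniforms: the score-minimizing theta should be close to 0
n = 1000
u, v = rng.uniform(size=n), rng.uniform(size=n)
grid = np.linspace(-3.0, 3.0, 41)
scores = np.array([pairwise_score(th, u, v) for th in grid])
theta_hat = grid[np.argmin(scores)]
```

Each term is the negative log-probability that the observed pairing, rather than its swap, occurred, which is convex in θ; a grid search stands in here for the gradient methods mentioned in the abstract.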
A proper scoring rule for minimum information bivariate copulas. Journal of Multivariate Analysis, vol. 201, Article 105271.