Semiparametric density estimation with localized Bregman divergence
Pub Date : 2025-01-30 DOI: 10.1016/j.jmva.2025.105419
Daisuke Matsuno, Kanta Naito
This paper examines semiparametric density estimation that combines a crude parametric guess with a nonparametric adjustment. The adjustment is implemented via minimization of a localized Bregman divergence, which yields a broad class of semiparametric density estimators. Asymptotic theory for the density estimators in this general class is developed, and concrete forms of the estimators under a particular divergence and parametric guess are derived. Simulations for several target densities and an application to a real data set reveal that the proposed density estimators offer competitive, and in some cases better, performance than the fully nonparametric kernel density estimator.
{"title":"Semiparametric density estimation with localized Bregman divergence","authors":"Daisuke Matsuno , Kanta Naito","doi":"10.1016/j.jmva.2025.105419","DOIUrl":"10.1016/j.jmva.2025.105419","url":null,"abstract":"<div><div>This paper examines semiparametric density estimation by combining a parametric crude guess and its nonparametric adjustment. The nonparametric adjustment is implemented via minimization of the localized Bregman divergence, which yields a broad class of semiparametric density estimators. Asymptotic theories of the density estimators in this general class are developed. Specific concrete forms of density estimators under a certain divergence and parametric guess are calculated. Simulations for several target densities and application to a real data set reveal that the proposed density estimators offer competitive or, in some cases, better performance compared to fully nonparametric kernel density estimator.</div></div>","PeriodicalId":16431,"journal":{"name":"Journal of Multivariate Analysis","volume":"207 ","pages":"Article 105419"},"PeriodicalIF":1.4,"publicationDate":"2025-01-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143133557","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Tree-structured Markov random fields with Poisson marginal distributions
Pub Date : 2025-01-29 DOI: 10.1016/j.jmva.2025.105418
Benjamin Côté, Hélène Cossette, Etienne Marceau
A new family of tree-structured Markov random fields for a vector of discrete counting random variables is introduced. By construction, the marginal distributions of these Markov random fields are all Poisson with the same mean and are untied from the strength or structure of their built-in dependence. This key feature is uncommon for Markov random fields and most convenient for applications. The specific properties of this new family yield a straightforward sampling procedure and analytic expressions for the joint probability mass function and the joint probability generating function of the vector of counting random variables, granting computational methods that scale well to vectors of high dimension. We study the distribution of the sum of the random variables constituting a Markov random field from the proposed family, analyze each random variable's individual contribution to that sum through expected allocations, and establish stochastic orderings to build a broad understanding of their behavior.
{"title":"Tree-structured Markov random fields with Poisson marginal distributions","authors":"Benjamin Côté, Hélène Cossette, Etienne Marceau","doi":"10.1016/j.jmva.2025.105418","DOIUrl":"10.1016/j.jmva.2025.105418","url":null,"abstract":"<div><div>A new family of tree-structured Markov random fields for a vector of discrete counting random variables is introduced. According to the characteristics of the family, the marginal distributions of the Markov random fields are all Poisson with the same mean, and are untied from the strength or structure of their built-in dependence. This key feature is uncommon for Markov random fields and most convenient for applications purposes. The specific properties of this new family confer a straightforward sampling procedure and analytic expressions for the joint probability mass function and the joint probability generating function of the vector of counting random variables, thus granting computational methods that scale well to vectors of high dimension. We study the distribution of the sum of random variables constituting a Markov random field from the proposed family, analyze a random variable’s individual contribution to that sum through expected allocations, and establish stochastic orderings to assess a wide understanding of their behavior.</div></div>","PeriodicalId":16431,"journal":{"name":"Journal of Multivariate Analysis","volume":"207 ","pages":"Article 105418"},"PeriodicalIF":1.4,"publicationDate":"2025-01-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143134410","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Model averaging for global Fréchet regression
Pub Date : 2025-01-25 DOI: 10.1016/j.jmva.2025.105416
Daisuke Kurisu, Taisuke Otsu
Non-Euclidean complex data analysis has become increasingly popular in various fields of data science. In a seminal paper, Petersen and Müller (2019) generalized the notion of regression analysis to non-Euclidean response objects. Meanwhile, in conventional regression analysis, model averaging has a long history and is widely applied in the statistics literature. This paper studies the problem of optimal prediction for non-Euclidean objects by extending the method of model averaging. In particular, we generalize the notion of model averaging to global Fréchet regression and establish an optimality property of cross-validation for selecting the averaging weights in terms of the final prediction error. A simulation study illustrates the excellent out-of-sample predictions of the proposed method.
{"title":"Model averaging for global Fréchet regression","authors":"Daisuke Kurisu , Taisuke Otsu","doi":"10.1016/j.jmva.2025.105416","DOIUrl":"10.1016/j.jmva.2025.105416","url":null,"abstract":"<div><div>Non-Euclidean complex data analysis becomes increasingly popular in various fields of data science. In a seminal paper, Petersen and Müller (2019) generalized the notion of regression analysis to non-Euclidean response objects. Meanwhile, in the conventional regression analysis, model averaging has a long history and is widely applied in statistics literature. This paper studies the problem of optimal prediction for non-Euclidean objects by extending the method of model averaging. In particular, we generalize the notion of model averaging for global Fréchet regressions and establish an optimal property of the cross-validation to select the averaging weights in terms of the final prediction error. A simulation study illustrates excellent out-of-sample predictions of the proposed method.</div></div>","PeriodicalId":16431,"journal":{"name":"Journal of Multivariate Analysis","volume":"207 ","pages":"Article 105416"},"PeriodicalIF":1.4,"publicationDate":"2025-01-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143133558","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Classification using global and local Mahalanobis distances
Pub Date : 2025-01-24 DOI: 10.1016/j.jmva.2025.105417
Annesha Ghosh, Anil K. Ghosh, Rita SahaRay, Soham Sarkar
We propose a novel semiparametric classifier based on the Mahalanobis distances of an observation from the competing classes. Our tool is a generalized additive model with the logistic link function that uses these distances as features to estimate the posterior probabilities of the different classes. While popular parametric classifiers like linear and quadratic discriminant analysis are mainly motivated by the normality of the underlying distributions, the proposed classifier is more flexible and free from such parametric modeling assumptions. Since the densities of elliptic distributions are functions of Mahalanobis distances, this classifier works well when the competing classes are (nearly) elliptic. In such cases, it often outperforms popular nonparametric classifiers, especially when the sample size is small compared to the dimension of the data. To cope with non-elliptic and possibly multimodal distributions, we propose a local version of the Mahalanobis distance. Subsequently, we propose another classifier based on a generalized additive model that uses the local Mahalanobis distances as features. This nonparametric classifier usually performs like the Mahalanobis-distance-based semiparametric classifier when the underlying distributions are elliptic, but outperforms it for several non-elliptic and multimodal distributions. We also investigate the behavior of these two classifiers in high-dimension, low-sample-size situations. A thorough numerical study involving several simulated and real datasets demonstrates the usefulness of the proposed classifiers in comparison to many state-of-the-art methods.
{"title":"Classification using global and local Mahalanobis distances","authors":"Annesha Ghosh , Anil K. Ghosh , Rita SahaRay , Soham Sarkar","doi":"10.1016/j.jmva.2025.105417","DOIUrl":"10.1016/j.jmva.2025.105417","url":null,"abstract":"<div><div>We propose a novel semiparametric classifier based on Mahalanobis distances of an observation from the competing classes. Our tool is a generalized additive model with the logistic link function that uses these distances as features to estimate the posterior probabilities of different classes. While popular parametric classifiers like linear and quadratic discriminant analyses are mainly motivated by the normality of the underlying distributions, the proposed classifier is more flexible and free from such parametric modeling assumptions. Since the densities of elliptic distributions are functions of Mahalanobis distances, this classifier works well when the competing classes are (nearly) elliptic. In such cases, it often outperforms popular nonparametric classifiers, especially when the sample size is small compared to the dimension of the data. To cope with non-elliptic and possibly multimodal distributions, we propose a local version of the Mahalanobis distance. Subsequently, we propose another classifier based on a generalized additive model that uses the local Mahalanobis distances as features. This nonparametric classifier usually performs like the Mahalanobis distance based semiparametric classifier when the underlying distributions are elliptic, but outperforms it for several non-elliptic and multimodal distributions. We also investigate the behavior of these two classifiers in high dimension, low sample size situations. A thorough numerical study involving several simulated and real datasets demonstrate the usefulness of the proposed classifiers in comparison to many state-of-the-art methods.</div></div>","PeriodicalId":16431,"journal":{"name":"Journal of Multivariate Analysis","volume":"207 ","pages":"Article 105417"},"PeriodicalIF":1.4,"publicationDate":"2025-01-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143134417","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An exponential inequality for Hilbert-valued U-statistics of i.i.d. data
Pub Date : 2025-01-06 DOI: 10.1016/j.jmva.2025.105406
Davide Giraudo
In this paper, we establish an exponential inequality for U-statistics of i.i.d. data with a varying kernel taking values in a separable Hilbert space. The bound is expressed as the sum of an exponential term and another term involving the tail of a sum of squared norms. We start with the degenerate case, then provide applications to U-statistics with a not necessarily degenerate fixed kernel, incomplete U-statistics, and weighted U-statistics.
{"title":"An exponential inequality for Hilbert-valued U-statistics of i.i.d. data","authors":"Davide Giraudo","doi":"10.1016/j.jmva.2025.105406","DOIUrl":"10.1016/j.jmva.2025.105406","url":null,"abstract":"<div><div>In this paper, we establish an exponential inequality for <span><math><mi>U</mi></math></span>-statistics of i.i.d. data, varying kernel and taking values in a separable Hilbert space. The bound is expressed as a sum of an exponential term plus an other one involving the tail of a sum of squared norms. We start by the degenerate case. Then we provide applications to <span><math><mi>U</mi></math></span>-statistics of not necessarily degenerate fixed kernel, incomplete <span><math><mi>U</mi></math></span>-statistics and weighted <span><math><mi>U</mi></math></span>-statistics.</div></div>","PeriodicalId":16431,"journal":{"name":"Journal of Multivariate Analysis","volume":"207 ","pages":"Article 105406"},"PeriodicalIF":1.4,"publicationDate":"2025-01-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143133559","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fisher's legacy of directional statistics, and beyond to statistics on manifolds
Pub Date : 2025-01-03 DOI: 10.1016/j.jmva.2024.105404
Kanti V. Mardia
It is not an exaggeration to say that R.A. Fisher is the Albert Einstein of Statistics. He pioneered almost all the main branches of statistics, but it is not as well known that he opened the area of Directional Statistics with his 1953 paper introducing a distribution on the sphere which is now known as the Fisher distribution. He stressed that for spherical data one should take into account that the data lie on a manifold. We describe this Fisher distribution and reanalyze his geological data. We also comment on the two goals he set himself in that paper, and on how he reinvented the von Mises distribution on the circle. Since then, many extensions of this distribution bearing Fisher's name have appeared, such as the von Mises–Fisher distribution and the matrix Fisher distribution. Indeed, the subject of Directional Statistics has grown tremendously in the last two decades, with new applications emerging in the life sciences, image analysis, machine learning, and so on. We present a recent method of constructing Fisher-type distributions on manifolds motivated by problems in machine learning. The catalogue of directional distributions has also expanded, including the bivariate von Mises distribution, and we describe its connection to work that fed into AlphaFold, recognized by the 2024 Nobel Prize in Chemistry. Further, the subject has evolved into Statistics on Manifolds, which also includes the new field of Shape Analysis, and finally, we end with a historical note pointing out some correspondence between D'Arcy Thompson and R.A. Fisher related to Shape Analysis.
{"title":"Fisher’s legacy of directional statistics, and beyond to statistics on manifolds","authors":"Kanti V. Mardia","doi":"10.1016/j.jmva.2024.105404","DOIUrl":"10.1016/j.jmva.2024.105404","url":null,"abstract":"<div><div>It is not an exaggeration to say that R.A. Fisher is the Albert Einstein of Statistics. He pioneered almost all the main branches of statistics, but it is not as well known that he opened the area of Directional Statistics with his 1953 paper introducing a distribution on the sphere which is now known as the Fisher distribution. He stressed that for spherical data one should take into account that the data is on a manifold. We will describe this Fisher distribution and reanalyze his geological data. We also comment on the two goals he set himself in that paper, and on how he reinvented the von Mises distribution on the circle. Since then, many extensions of this distribution have appeared bearing Fisher’s name such as the von Mises–Fisher distribution and the matrix Fisher distribution. In fact, the subject of Directional Statistics has grown tremendously in the last two decades with new applications emerging in life sciences, image analysis, machine learning and so on. We give a recent new method of constructing the Fisher type distributions on manifolds which has been motivated by some problems in machine learning. The number of directional distributions has increased since then, including the bivariate von Mises distribution and we describe its connection to work resulting in the 2024 Nobel-winning AlphaFold (in Chemistry). Further, the subject has evolved as Statistics on Manifolds which also includes the new field of Shape Analysis, and finally, we end with a historical note pointing out some correspondence between D’Arcy Thompson and R.A. Fisher related to Shape Analysis.</div></div>","PeriodicalId":16431,"journal":{"name":"Journal of Multivariate Analysis","volume":"207 ","pages":"Article 105404"},"PeriodicalIF":1.4,"publicationDate":"2025-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143134409","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
SFQRA: Scaled factor-augmented quantile regression with aggregation in conditional mean forecasting
Pub Date : 2025-01-03 DOI: 10.1016/j.jmva.2024.105405
Lei Shu, Yifan Hao, Yu Chen, Qing Yang
Achieving robust forecasts for a single time series with many covariates and possibly nonlinear effects is a problem worth investigating. In this paper, a scaled factor-augmented quantile regression with aggregation (SFQRA) method is proposed for effective prediction. It first estimates different conditional quantiles by introducing scaled covariates into factor-augmented quantile regression, which not only combats the curse of dimensionality but also incorporates the target information into the estimation. The different conditional quantiles are then aggregated appropriately into a robust forecast. Moreover, combining SFQRA with feature screening via an aggregated quantile correlation extends it to the case where only a portion of the covariates is informative. The effectiveness of the proposed methods is justified theoretically under a framework in which both the cross-section and the time dimension are large, with no restriction imposed on the relation between them. Various simulation studies and real data analyses demonstrate the superiority of the newly proposed method in forecasting.
{"title":"SFQRA: Scaled factor-augmented quantile regression with aggregation in conditional mean forecasting","authors":"Lei Shu , Yifan Hao , Yu Chen , Qing Yang","doi":"10.1016/j.jmva.2024.105405","DOIUrl":"10.1016/j.jmva.2024.105405","url":null,"abstract":"<div><div>Achieving robust forecasts for a single time series with many covariates and possible nonlinear effects is a problem worth investigating. In this paper, a scaled factor-augmented quantile regression with aggregation (SFQRA) method is proposed for an effective prediction. It first estimates different conditional quantiles by introducing scaled covariates to the factor-augmented quantile regression, which not only combats the curse of dimensionality but also includes the target information in the estimation. Then the different conditional quantiles are aggregated appropriately to a robust forecast. Moreover, combining SFQRA with feature screening via an aggregated quantile correlation allows it to be extended to handle cases when only a portion of covariates is informative. The effectiveness of the proposed methods is justified theoretically, under the framework of large cross-sections and large time dimensions while no restriction is imposed on the relation between them. Various simulation studies and real data analyses demonstrate the superiority of the newly proposed method in forecasting.</div></div>","PeriodicalId":16431,"journal":{"name":"Journal of Multivariate Analysis","volume":"207 ","pages":"Article 105405"},"PeriodicalIF":1.4,"publicationDate":"2025-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143134412","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Semi-functional varying coefficient mode-based regression
Pub Date : 2025-01-01 DOI: 10.1016/j.jmva.2024.105402
Tao Wang
We propose estimating semi-functional varying coefficient regression based on the mode value through a kernel objective function, where the bandwidth is treated as a tuning parameter to balance efficiency and robustness. For estimation, functional principal component basis functions are utilized to approximate the slope function and the functional predictor, while B-spline functions are employed to approximate the varying coefficient component. Under mild regularity conditions, the convergence rates of the resulting estimators for the unknown slope function and varying coefficient are established in various cases. To estimate the proposed model numerically, we recommend a computationally efficient mode expectation–maximization algorithm with the aid of a Gaussian kernel. The tuning parameters are selected using a mode-based Bayesian information criterion and cross-validation procedures. Building on the generalized likelihood technique, we further develop a goodness-of-fit test to assess the constancy of the varying coefficient functions and put forward a wild bootstrap procedure for estimating the corresponding critical values. The finite-sample performance of the developed estimators is illustrated through Monte Carlo simulations and a real data analysis of the Tecator data. The results produced by the proposed method compare favorably with those obtained from alternative estimation techniques.
{"title":"Semi-functional varying coefficient mode-based regression","authors":"Tao Wang","doi":"10.1016/j.jmva.2024.105402","DOIUrl":"10.1016/j.jmva.2024.105402","url":null,"abstract":"<div><div>We propose estimating semi-functional varying coefficient regression based on the mode value through a kernel objective function, where the bandwidth included is treated as a tuning parameter to achieve efficiency and robustness. For estimation, functional principal component basis functions are utilized to approximate the slope function and functional predictor variable, while B-spline functions are employed to approximate the varying coefficient component. Under mild regularity conditions, the convergence rates of the resulting estimators for the unknown slope function and varying coefficient are established under various cases. To numerically estimate the proposed model, we recommend employing a computationally efficient mode expectation–maximization algorithm with the aid of a Gaussian kernel. The tuning parameters are selected using the mode-based Bayesian information criterion and cross-validation procedures. Built upon the generalized likelihood technique, we further develop a goodness-of-fit test to assess the constancy of varying coefficient functions and put forward a wild bootstrap procedure for estimating the corresponding critical values. The finite sample performance of the developed estimators is illustrated through Monte Carlo simulations and real data analysis related to the Tecator data. The results produced by the propounded method are compared favorably with those obtained from alternative estimation techniques.</div></div>","PeriodicalId":16431,"journal":{"name":"Journal of Multivariate Analysis","volume":"207 ","pages":"Article 105402"},"PeriodicalIF":1.4,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143134414","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sequential estimation of high-dimensional signal plus noise models under general elliptical frameworks
Pub Date : 2024-12-30 DOI: 10.1016/j.jmva.2024.105403
Li Yanpeng, Xie Jiahui, Zhou Guoliang, Zhou Wang
High-dimensional data analysis has attracted considerable interest and is facing new challenges, one of which is the growing availability of noise-corrupted data arriving in a streaming manner, such as signals and stock prices. In this paper, we develop a sequential method to dynamically update the estimates of signal and noise strength in signal-plus-noise models. The proposed sequential method is easy to compute, relying only on stored statistics and the current data point. The consistency and, more importantly, the asymptotic normality of the estimators of signal strength and noise level are established for high-dimensional settings under mild conditions. Simulations and real data examples further illustrate the practical utility of our proposal.
{"title":"Sequential estimation of high-dimensional signal plus noise models under general elliptical frameworks","authors":"Li Yanpeng , Xie Jiahui , Zhou Guoliang , Zhou Wang","doi":"10.1016/j.jmva.2024.105403","DOIUrl":"10.1016/j.jmva.2024.105403","url":null,"abstract":"<div><div>High dimensional data analysis has attracted considerable interest and is facing new challenges, one of which is the increasingly available data with noise corrupted and in a streaming manner, such as signals and stocks. In this paper, we develop a sequential method to dynamically update the estimates of signal and noise strength in signal plus noise models. The proposed sequential method is easy to compute based on the stored statistics and the current data point. The consistency and, more importantly, the asymptotic normality of the estimators of signal strength and noise level are demonstrated for high dimensional settings under mild conditions. Simulations and real data examples are further provided to illustrate the practical utility of our proposal.</div></div>","PeriodicalId":16431,"journal":{"name":"Journal of Multivariate Analysis","volume":"207 ","pages":"Article 105403"},"PeriodicalIF":1.4,"publicationDate":"2024-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143134413","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On the consistency of the jackknife estimator of the asymptotic variance of spatial median
Pub Date : 2024-12-15 DOI: 10.1016/j.jmva.2024.105399
František Rublík
It is shown that the usual delete-1 jackknife variance estimator of the asymptotic variance of the spatial median is consistent. This is proved under the assumptions that the dimension of the data is d ≥ 3, the sampled distribution possesses a density with respect to the Lebesgue measure, and this density is bounded on every bounded subset of R^d.
{"title":"On the consistency of the jackknife estimator of the asymptotic variance of spatial median","authors":"František Rublík","doi":"10.1016/j.jmva.2024.105399","DOIUrl":"10.1016/j.jmva.2024.105399","url":null,"abstract":"<div><div>It is shown that the usual delete-1 jackknife variance estimator of the asymptotic variance of spatial median is consistent. This is proved under the assumptions that the dimension of the data <span><math><mrow><mi>d</mi><mo>≥</mo><mn>3</mn></mrow></math></span>, the sampled distribution possesses a density with respect to the Lebesgue measure and this density is bounded on every bounded subset of <span><math><msup><mrow><mi>R</mi></mrow><mrow><mi>d</mi></mrow></msup></math></span>.</div></div>","PeriodicalId":16431,"journal":{"name":"Journal of Multivariate Analysis","volume":"207 ","pages":"Article 105399"},"PeriodicalIF":1.4,"publicationDate":"2024-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143134415","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}