A parameter transformation of the anisotropic Matérn covariance function
Kamal Rai, Patrick E. Brown
Canadian Journal of Statistics, 53(2). DOI: 10.1002/cjs.11839. Published 2025-02-10.

We describe a polar coordinate transformation of the anisotropy parameters of the Matérn covariance function, which provides two benefits over the standard parameterization. First, it identifies a single point (the origin) with the special case of isotropy. Second, the posterior distribution of the transformed anisotropic angle and ratio is approximately bell-shaped and unimodal even in the case of isotropy. This has advantages for parameter inference and density estimation. We also apply a transformation to the standard deviation and range such that they are approximately orthogonal. We demonstrate this parameter transformation through two simulated and two real data sets, and conclude by considering possible extensions, such as implementing this transformation for approximate Bayesian inference methods.
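A polar parameterization of this kind can be sketched in a few lines. The mapping below is a plausible construction consistent with the abstract, not necessarily the paper's exact transform: the radius is taken as the log of the anisotropy ratio, so the ratio of 1 (isotropy) maps to the origin, and the angle is doubled because an anisotropy ellipse is invariant under rotation by π.

```python
import math

def anisotropy_to_xy(ratio, angle):
    # Hypothetical sketch: radius = log(ratio), so ratio = 1 (isotropy)
    # maps to the single point (0, 0); the angle is doubled because an
    # anisotropy ellipse rotated by pi is the same ellipse.
    r = math.log(ratio)
    return (r * math.cos(2 * angle), r * math.sin(2 * angle))

def xy_to_anisotropy(x, y):
    # Inverse map: recover (ratio, angle) from the Cartesian coordinates.
    r = math.hypot(x, y)
    return (math.exp(r), 0.5 * math.atan2(y, x))
```

With this construction the transformed parameters live on an unconstrained plane, which is what makes a bell-shaped, unimodal posterior plausible even under isotropy, where the angle itself is unidentified.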
The quantile-based classifier with variable-wise parameters
Marco Berrettini, Christian Martin Hennig, Cinzia Viroli
Canadian Journal of Statistics, 53(2). DOI: 10.1002/cjs.11837. Published 2025-02-01.

Quantile-based classifiers can classify high-dimensional observations by minimizing the discrepancy between an observation and a class, based on suitable quantiles of the within-class distributions, using a single common percentage for all variables. The present work extends these classifiers by introducing a way to determine potentially different optimal percentages for different variables. Furthermore, a variable-wise scale parameter is introduced. A simple greedy algorithm to estimate the parameters is proposed, and their consistency in a nonparametric setting is proved. Experiments on artificially generated and real data confirm the potential of the quantile-based classifier with variable-wise parameters.
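The classification rule described above can be sketched as follows, under the assumption that the discrepancy is a sum of variable-wise quantile ("check") losses; the function names, the per-variable percentages `thetas`, and the scales `scales` are illustrative and not the paper's notation, and the greedy parameter-estimation step is omitted.

```python
import numpy as np

def check_loss(u, theta):
    # Quantile ("check") loss: theta*u for u >= 0, (theta - 1)*u for u < 0.
    return np.where(u >= 0, theta * u, (theta - 1) * u)

def quantile_discrepancy(x, class_quantiles, thetas, scales):
    # Sum of variable-wise check losses of x against a class's quantiles,
    # each variable with its own percentage theta_j and scale s_j.
    return np.sum(check_loss(x - class_quantiles, thetas) / scales)

def classify(x, train_by_class, thetas, scales):
    # Assign x to the class minimizing the discrepancy; the class quantiles
    # are the empirical within-class quantiles at levels thetas.
    best, best_d = None, np.inf
    for label, X in train_by_class.items():
        q = np.array([np.quantile(X[:, j], thetas[j]) for j in range(X.shape[1])])
        d = quantile_discrepancy(x, q, thetas, scales)
        if d < best_d:
            best, best_d = label, d
    return best
```

Setting every theta to 0.5 recovers a median-based classifier; the extension is precisely that each variable may use a different, data-chosen percentage and scale.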
Many common clustering methods cannot be used for clustering balanced multivariate longitudinal data in cases where the covariance of variables is a function of the time points. In this article, a copula kernel mixture model (CKMM) is proposed for clustering data of this type. The CKMM is a finite mixture model that decomposes each mixture component's joint density function into a copula and marginal distribution functions. In this decomposition, the Gaussian copula is used due to its mathematical tractability and Gaussian kernel functions are used to estimate the marginal distributions. A generalized expectation-maximization algorithm is used to estimate the model parameters. The performance of the proposed model is assessed in a simulation study and on two real datasets. The proposed model is shown to have effective performance in comparison with standard methods, such as
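The copula decomposition underlying such a component density can be sketched as below. The Gaussian copula log-density formula is standard; the kernel-based marginal estimator and all names (`kernel_marginal`, bandwidth `h`, correlation matrix `R`) are illustrative assumptions, and the generalized EM fitting loop is omitted.

```python
import numpy as np
from scipy.stats import norm

def kernel_marginal(x, data, h):
    # Gaussian-kernel estimates of a marginal pdf and cdf at x
    # (h is the bandwidth; a sketch, not the paper's exact estimator).
    z = (x - data) / h
    return norm.pdf(z).mean() / h, norm.cdf(z).mean()

def gaussian_copula_logdensity(u, R):
    # Log density of the Gaussian copula with correlation matrix R,
    # evaluated at uniform coordinates u via z = Phi^{-1}(u).
    z = norm.ppf(u)
    Rinv = np.linalg.inv(R)
    _, logdet = np.linalg.slogdet(R)
    return -0.5 * logdet - 0.5 * z @ (Rinv - np.eye(len(u))) @ z

def component_logdensity(x, data_cols, h, R):
    # Joint log density of one mixture component: Gaussian copula
    # combined with kernel-estimated marginal pdfs and cdfs.
    logpdf, u = 0.0, np.empty(len(x))
    for j, (xj, col) in enumerate(zip(x, data_cols)):
        pj, u[j] = kernel_marginal(xj, col, h)
        logpdf += np.log(pj)
    return logpdf + gaussian_copula_logdensity(u, R)
```

When R is the identity, the copula term vanishes and the component density reduces to the product of its kernel-estimated marginals, which makes the role of R as the dependence structure explicit.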