Pub Date: 2025-03-10 | eCollection Date: 2025-01-01 | DOI: 10.1080/02664763.2025.2475349
Binod Manandhar, Balgobin Nandram
We present predictive hierarchical Bayesian models for fitting continuous, positively skewed size data from small areas with the generalized beta of the second kind (GB2) distribution. We discuss three different GB2 mixture models, each implementing small area estimation. The posterior distributions of these models are complex, so we use Taylor series approximations, grid sampling, and Metropolis samplers to fit them. We apply our models to per-capita consumption size data from the second Nepal Living Standards Survey and choose the best-fitting of the three GB2 mixture models. With the best-fitting model, we provide small area estimates of poverty indicators by linking the survey data with the census data. A simulation study is also provided.
{"title":"Hierarchical Bayesian models for small area estimation with GB2 distribution.","authors":"Binod Manandhar, Balgobin Nandram","doi":"10.1080/02664763.2025.2475349","DOIUrl":"https://doi.org/10.1080/02664763.2025.2475349","url":null,"abstract":"<p><p>We present predictive hierarchical Bayesian models to fit continuous, and positively skewed size data from small areas with the generalized beta of the second kind (GB2) distribution. We discuss three different GB2 mixture models. In the models, we have implemented the technique of small areas estimation. The posterior distributions of these models are complex. We have used Taylor series approximations, grid sampling and Metropolis samplers to fit the models. We have applied our models to the per-capita consumption size data from the second Nepal Living Standards Survey. We choose the best fitted model from the three GB2 mixture models. With the best fitted model, we provide small area estimation of poverty indicators by linking the survey data with the census data. A simulation study is provided.</p>","PeriodicalId":15239,"journal":{"name":"Journal of Applied Statistics","volume":"52 13","pages":"2448-2477"},"PeriodicalIF":1.1,"publicationDate":"2025-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12490410/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145232630","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-03-10 | eCollection Date: 2025-01-01 | DOI: 10.1080/02664763.2025.2473542
V Gómez-Rubio, J Lagos, F Palmí-Perales
Finding players with similar profiles is an important problem in sports such as football (known as soccer in some countries). Scouting for new players requires a wealth of information about the available players so that profiles similar to that of a target player can be identified. However, information about players' positions on the field is seldom employed. For this reason, a novel approach based on spatial data analysis is introduced to produce a spatial similarity index that can help identify similar players. The use of this new index is illustrated with an example from the Spanish competition 'La Liga', season 2019-2020, in which hundreds of players are clustered according to their position on the field.
{"title":"Spatial similarity index for scouting in football.","authors":"V Gómez-Rubio, J Lagos, F Palmí-Perales","doi":"10.1080/02664763.2025.2473542","DOIUrl":"10.1080/02664763.2025.2473542","url":null,"abstract":"<p><p>Finding players with similar profiles is an important problem in sports such as football (also known as soccer in some countries). Scouting for new players requires a wealth of information about the available players so that similar profiles to that of a target player can be identified. However, information about the position of the players in the field is seldom employed. For this reason, a novel approach based on spatial data analysis is introduced to produce a spatial similarity index that can help to identify similar players. The use of this new spatial similarity index is illustrated with an example from the Spanish competition 'La Liga', season 2019-2020, in which hundreds of players are clustered according to their position in the field.</p>","PeriodicalId":15239,"journal":{"name":"Journal of Applied Statistics","volume":"52 14","pages":"2745-2758"},"PeriodicalIF":1.1,"publicationDate":"2025-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12581730/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145444885","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-03-01 | eCollection Date: 2025-01-01 | DOI: 10.1080/02664763.2025.2472150
Maximilian Linde, Jorge N Tendeiro, Don van Ravenzwaaij
The use of Cox proportional hazards regression to analyze time-to-event data is ubiquitous in biomedical research. Typically, the frequentist framework is used to draw conclusions about whether hazards differ between patients in an experimental and a control condition. We offer a procedure to compute Bayes factors for simple Cox models, both when the full data are available and when only summary statistics are available. The procedure is implemented in our 'baymedr' R package. The use of Bayes factors remedies some shortcomings of frequentist inference and has the potential to save scarce resources.
{"title":"Bayes factors for two-group comparisons in Cox regression with an application for reverse-engineering raw data from summary statistics.","authors":"Maximilian Linde, Jorge N Tendeiro, Don van Ravenzwaaij","doi":"10.1080/02664763.2025.2472150","DOIUrl":"10.1080/02664763.2025.2472150","url":null,"abstract":"<p><p>The use of Cox proportional hazards regression to analyze time-to-event data is ubiquitous in biomedical research. Typically, the frequentist framework is used to draw conclusions about whether hazards are different between patients in an experimental and a control condition. We offer a procedure to compute Bayes factors for simple Cox models, both for the scenario where the full data are available and for the scenario where only summary statistics are available. The procedure is implemented in our 'baymedr' R package. The usage of Bayes factors remedies some shortcomings of frequentist inference and has the potential to save scarce resources.</p>","PeriodicalId":15239,"journal":{"name":"Journal of Applied Statistics","volume":"52 13","pages":"2413-2437"},"PeriodicalIF":1.1,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12490364/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145232632","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-02-14 | eCollection Date: 2025-01-01 | DOI: 10.1080/02664763.2025.2455625
Adel Ahmadi Nadi, Ali Yeganeh, Sandile Charles Shongwe, Alireza Shadman
Online monitoring of the ratio of two random characteristics, rather than of their individual behaviors, has many applications. For this purpose, various control charts, known in the literature as RZ charts (e.g. Shewhart, memory-type, and adaptive monitoring schemes), have been designed to detect abnormal patterns in the ratio as quickly as possible. Most existing RZ charts rely on two assumptions about the process: (i) both individual characteristics are normally distributed, and (ii) the direction (upward or downward) of the RZ's deviation from its in-control (IC) state to an out-of-control (OC) condition is known. However, these assumptions can be violated in many practical situations. In recent years, applying machine learning (ML) models in the Statistical Process Monitoring (SPM) area has provided several contributions relative to traditional statistical methods, yet ML-based control charts have not been discussed in the RZ monitoring literature. To this end, this study introduces a novel clustering-based control chart for monitoring RZ in Phase II. The method makes no assumption about the direction of the RZ's deviation and does not require a specific distribution for the two random characteristics. Furthermore, it can estimate the change point (CP) in the process.
{"title":"An integrated change point detection and online monitoring approach for the ratio of two variables using clustering-based control charts.","authors":"Adel Ahmadi Nadi, Ali Yeganeh, Sandile Charles Shongwe, Alireza Shadman","doi":"10.1080/02664763.2025.2455625","DOIUrl":"10.1080/02664763.2025.2455625","url":null,"abstract":"<p><p>Online monitoring of the ratio of two random characteristics rather than monitoring their individual behaviors has many applications. For this aim, there are various control charts, known as RZ charts in the literature, e.g. Shewhart, memory-type and adaptive monitoring schemes, have been designed to detect the ratio's abnormal patterns as soon as possible. Most of the existing RZ charts rely on two assumptions about the process: (<i>i</i>) both individual characteristics are normally distributed, and (<i>ii</i>) the direction (upward or downward) of the RZ's deviation from its in-control (IC) state to an out-of-control (OC) condition is known. However, these assumptions can be violated in many practical situations. In recent years, applying the machine learning (ML) models in the Statistical Process Monitoring (SPM) area has provided several contributions compared to traditional statistical methods. However, ML-based control charts have not yet been discussed in the RZ monitoring literature. To this end, this study introduces a novel clustering-based control chart for monitoring RZ in Phase II. This method avoids making any assumptions about the direction of RZ's deviation and does not need to assume a specific distribution for the two random characteristics. 
Furthermore, it can estimate the Change Point (CP) in the process.</p>","PeriodicalId":15239,"journal":{"name":"Journal of Applied Statistics","volume":"52 11","pages":"2060-2093"},"PeriodicalIF":1.1,"publicationDate":"2025-02-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12404067/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144992745","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
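The clustering approach itself is not reproduced here. As a baseline for contrast, a distribution-free Shewhart-style RZ chart with empirical Phase I quantile limits, an illustrative sketch rather than the paper's method, might look like:

```python
def rz_chart(phase1_x, phase1_y, phase2_x, phase2_y, alpha=0.0027):
    """Flag Phase II ratios outside empirical Phase I quantile limits."""
    ratios1 = sorted(x / y for x, y in zip(phase1_x, phase1_y))

    def quantile(sorted_vals, p):
        # Crude order-statistic quantile; adequate for a sketch.
        idx = min(int(p * len(sorted_vals)), len(sorted_vals) - 1)
        return sorted_vals[idx]

    lcl = quantile(ratios1, alpha / 2)
    ucl = quantile(ratios1, 1 - alpha / 2)
    signals = [t for t, (x, y) in enumerate(zip(phase2_x, phase2_y))
               if not lcl <= x / y <= ucl]
    return lcl, ucl, signals
```

Because the limits are empirical quantiles, no normality assumption is needed; unlike the paper's chart, this sketch does not locate the change point.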
Pub Date: 2025-02-11 | eCollection Date: 2025-01-01 | DOI: 10.1080/02664763.2025.2462969
Isaac Manring, Honglang Wang, George Mohler, Xenia Miscouridou
Spatiotemporal point process models have a rich history of effectively modeling event data in space and time. However, they are sometimes neglected because of the difficulty of implementing them, and packages able to perform inference for these models are scarce, particularly in Python. We therefore present BSTPP, a Python package for Bayesian inference on spatiotemporal point processes. It offers three kinds of models: space-time separable log-Gaussian Cox, Hawkes, and Cox Hawkes. Users may employ the predefined trigger parameterizations for the Hawkes models, or implement their own trigger functions with the extendable Trigger module. For the Cox models, posterior inference on the Gaussian processes is sped up with a pre-trained variational autoencoder (VAE), and the package includes a new, flexible pre-trained VAE. We validate the package through simulation studies and then apply it to shooting data from Chicago.
{"title":"BSTPP: a python package for Bayesian spatiotemporal point processes.","authors":"Isaac Manring, Honglang Wang, George Mohler, Xenia Miscouridou","doi":"10.1080/02664763.2025.2462969","DOIUrl":"https://doi.org/10.1080/02664763.2025.2462969","url":null,"abstract":"<p><p>Spatiotemporal point process models have a rich history of effectively modeling event data in space and time. However, they are sometimes neglected due to the difficulty of implementing them. There is a lack of packages with the ability to perform inference for these models, particularly in python. Thus we present BSTPP a python package for Bayesian inference on spatiotemporal point processes. It offers three different kinds of models: space-time separable Log Gaussian Cox, Hawkes, and Cox Hawkes. Users may employ the predefined trigger parameterizations for the Hawkes models, or they may implement their own trigger functions with the extendable Trigger module. For the Cox models, posterior inference on the Gaussian processes is sped up with a pre-trained Variational Auto Encoder (VAE). The package includes a new flexible pre-trained VAE. We validate the model through simulation studies and then explore it by applying it to shooting data in Chicago.</p>","PeriodicalId":15239,"journal":{"name":"Journal of Applied Statistics","volume":"52 13","pages":"2524-2543"},"PeriodicalIF":1.1,"publicationDate":"2025-02-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12490397/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145232624","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-02-07 | eCollection Date: 2025-01-01 | DOI: 10.1080/02664763.2025.2459293
Tapio Nummi, Jyrki Möttönen, Pasi Väkeväinen, Janne Salonen, Timothy E O'Brien
When analyzing real data sets, statisticians often face the problem that the data are heterogeneous and that it may not be possible to model this heterogeneity directly. One natural option in this case is to use methods based on finite mixtures. The key question in these techniques is often the best number of mixture components or, depending on the focus of the analysis, the best number of sub-populations when the model is otherwise fixed. Moreover, when the distribution of the response variable deviates from the model assumptions, it is common to employ an appropriate transformation to align the distribution with the model's requirements. To solve this problem in the mixture regression context, we propose a technique based on the scaled Box-Cox transformation for normal mixtures. The specific focus here is on mixture regression for longitudinal data, so-called trajectory analysis. We present practical results as well as simulation experiments demonstrating that our method yields reasonable results. Associated R programs are also provided.
{"title":"On the improved estimation of the normal mixture components for longitudinal data.","authors":"Tapio Nummi, Jyrki Möttönen, Pasi Väkeväinen, Janne Salonen, Timothy E O'Brien","doi":"10.1080/02664763.2025.2459293","DOIUrl":"10.1080/02664763.2025.2459293","url":null,"abstract":"<p><p>When analyzing real data sets, statisticians often face the question that the data are heterogeneous and it may not necessarily be possible to model this heterogeneity directly. One natural option in this case is to use the methods based on finite mixtures. The key question in these techniques often is what is the best number of mixtures or, depending on the focus of the analysis, the best number of sub-populations when the model is otherwise fixed. Moreover, when the distribution of the response variable deviates from meeting the assumptions, it's common to employ an appropriate transformation to align the distribution with the model's requirements. To solve the problem in the mixture regression context we propose a technique based on the scaled Box-Cox transformation for normal mixtures. The specific focus here is on mixture regression for longitudinal data, the so-called trajectory analysis. We present interesting practical results as well as simulation experiments to demonstrate that our method yields reasonable results. 
Associated R-programs are also provided.</p>","PeriodicalId":15239,"journal":{"name":"Journal of Applied Statistics","volume":"52 12","pages":"2271-2290"},"PeriodicalIF":1.1,"publicationDate":"2025-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12416014/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145029994","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
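The mixture estimation procedure is not reproduced here, but the transformation at its core is short. A sketch of the Box-Cox transform and a geometric-mean-scaled version; the scaling shown is the standard Jacobian normalization and is assumed, not confirmed, to match the paper's "scaled" variant:

```python
import math

def box_cox(y, lam):
    """Box-Cox transform; lam = 0 is the log transform."""
    if lam == 0:
        return math.log(y)
    return (y ** lam - 1) / lam

def scaled_box_cox(ys, lam):
    """Box-Cox rescaled by the geometric mean, making log-likelihoods
    comparable across different lam (the Jacobian normalization)."""
    gm = math.exp(sum(math.log(y) for y in ys) / len(ys))
    return [box_cox(y, lam) * gm ** (1 - lam) for y in ys]
```

Profiling a normal-mixture log-likelihood over lam on the scaled values then gives a data-driven choice of transformation.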
Pub Date: 2025-02-05 | eCollection Date: 2025-01-01 | DOI: 10.1080/02664763.2025.2460072
Shameem Alam, Javid Shabbir, Malaika Nadeem
Adaptive cluster sampling is particularly helpful whenever the target population is rare, unevenly dispersed, concealed, or difficult to find. In the current investigation, under an adaptive cluster sampling approach, we propose a ratio-product-logarithmic-type estimator employing a single auxiliary variable for estimating the finite population variance. The bias and mean squared error of the proposed estimator are derived, and its performance is evaluated using simulation as well as real data sets. The results show that for estimating the finite population variance, the proposed estimator outperforms the competing estimators.
{"title":"On use of adaptive cluster sampling for variance estimation.","authors":"Shameem Alam, Javid Shabbir, Malaika Nadeem","doi":"10.1080/02664763.2025.2460072","DOIUrl":"https://doi.org/10.1080/02664763.2025.2460072","url":null,"abstract":"<p><p>Adaptive cluster sampling is particularly helpful whenever the target population is unique, dispersed unevenly, concealed or difficult to find. In the current investigation, under an adaptive cluster sampling approach, we propose a ratio-product-logarithmic type estimator employing a single auxiliary variable for the estimation of finite population variance. The bias and mean square error of the proposed estimator are developed by using simulation as well as real data sets. The study results show that for estimating the finite population variance, the proposed estimator outperforms the competing estimators.</p>","PeriodicalId":15239,"journal":{"name":"Journal of Applied Statistics","volume":"52 12","pages":"2291-2305"},"PeriodicalIF":1.1,"publicationDate":"2025-02-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12416028/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145029941","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-02-05 | eCollection Date: 2025-01-01 | DOI: 10.1080/02664763.2025.2461715
Marcos S Oliveira, Marcos O Prates, Christian E Galarza, Victor H Lachos
This study presents diagnostic techniques for Heckman selection models estimated using the EM algorithm. The focus is on the selection t and normal models, based on the bivariate Student's-t and bivariate normal distributions, respectively. The Heckman selection model is a key econometric tool for estimating relationships while addressing selection bias. Relying on an EM-type algorithm, we develop global and local influence analyses based on the conditional expectation of the complete-data log-likelihood function, exploring four perturbation schemes for local influence analysis. To assess the effectiveness of the proposed diagnostic measures in identifying influential observations, we conducted a simulation study, complemented by two real-data applications that demonstrate how these techniques can effectively identify influential points. The proposed algorithms and methodologies are incorporated into the R package HeckmanEM.
{"title":"Influence diagnostics in the Heckman selection models based on EM algorithms.","authors":"Marcos S Oliveira, Marcos O Prates, Christian E Galarza, Victor H Lachos","doi":"10.1080/02664763.2025.2461715","DOIUrl":"https://doi.org/10.1080/02664763.2025.2461715","url":null,"abstract":"<p><p>This study presents diagnostic techniques for Heckman selection models estimated using the EM algorithm. The focus is on the selection <i>t</i> and normal models, based on the bivariate Student's-<i>t</i> and bivariate normal distributions, respectively. The Heckman selection model is a key econometric tool for estimating relationships while addressing selection bias. Relying on the EM-type algorithm, we develop global and local influence analyses based on the conditional expectation of the complete-data log-likelihood function, exploring four perturbation schemes for local influence analysis. To assess the effectiveness of the proposed diagnostic measures in identifying influential observations, we conducted a simulation study, complemented by two real-data applications that demonstrate how these techniques can effectively identify influential points. The proposed algorithms and methodologies are incorporated into the R package HeckmanEM.</p>","PeriodicalId":15239,"journal":{"name":"Journal of Applied Statistics","volume":"52 13","pages":"2384-2412"},"PeriodicalIF":1.1,"publicationDate":"2025-02-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12490367/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145232640","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-02-04 | eCollection Date: 2025-01-01 | DOI: 10.1080/02664763.2025.2461186
Sang Gil Kang, Yongku Kim
Several methods have been developed for nonparametric regression problems, including classical approaches such as kernels, local polynomials, smoothing splines, sieves, and wavelets, as well as relatively new methods such as the lasso, the generalized lasso, and trend filtering. This study proposes an objective Bayesian trend filtering method based on model selection. The procedure estimates the functions using adaptive piecewise polynomial regression models with two components: first, we determine the intervals with varying trends using Bayesian binary segmentation, and then we evaluate the most reasonable trend via Bayesian model selection on these intervals. The procedure uses intrinsic priors, which eliminates any subjective input, and we prove that the proposed method with these intrinsic priors is consistent for large sample sizes. The behavior of the proposed Bayesian trend filtering procedure is compared with existing trend filtering using a simulation study and real examples. Finally, we apply the proposed method to detect variance change points under mean changes; existing methods yield inaccurate estimates of the variance change points when the mean varies smoothly, as the sudden-change assumption is violated in such cases.
{"title":"Objective Bayesian trend filtering via adaptive piecewise polynomial regression.","authors":"Sang Gil Kang, Yongku Kim","doi":"10.1080/02664763.2025.2461186","DOIUrl":"https://doi.org/10.1080/02664763.2025.2461186","url":null,"abstract":"<p><p>Several methods have been developed for nonparametric regression problems, including classical approaches such as kernels, local polynomials, smoothing splines, sieves, and wavelets, as well as relatively new methods such as lasso, generalized lasso, and trend filtering. This study proposes an objective Bayesian trend filtering method based on model selection. The procedure followed in this study estimates the functions based on adaptive piecewise polynomial regression models with two components. First, we determine the intervals with varying trends using Bayesian binary segmentation and then evaluate the most reasonable trend via Bayesian model selection at these intervals. This trend filtering procedure follows Bayesian model selection that uses intrinsic priors, which eliminated any subjective input. Additionally, we prove that the proposed method using these intrinsic priors was consistent when applied to large sample sizes. The behavior of the proposed Bayesian trend filtering procedure is compared with the trend filtering using a simulation study and real examples. 
Finally, we apply the proposed method to detect the variance change points under mean changes, whereas the existing methods yielded inaccurate estimates of the variance change points when the mean varied smoothly, as the sudden-change assumption was violated in such cases.</p>","PeriodicalId":15239,"journal":{"name":"Journal of Applied Statistics","volume":"52 13","pages":"2357-2383"},"PeriodicalIF":1.1,"publicationDate":"2025-02-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12490381/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145232665","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
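The Bayesian binary segmentation step is not reproduced here, but the deterministic skeleton it shares with classical binary segmentation, scanning for the split that most reduces the within-segment sum of squares and then recursing on each half, is compact:

```python
def best_split(xs):
    """Index k minimizing the within-segment sum of squares when xs is
    divided into xs[:k] and xs[k:], i.e. the best single mean change.
    Binary segmentation applies this recursively to each segment."""
    best_k, best_cost = None, float("inf")
    for k in range(1, len(xs)):
        left, right = xs[:k], xs[k:]
        ml, mr = sum(left) / len(left), sum(right) / len(right)
        cost = sum((v - ml) ** 2 for v in left) + sum((v - mr) ** 2 for v in right)
        if cost < best_cost:
            best_k, best_cost = k, cost
    return best_k, best_cost
```

In the paper's objective Bayesian version, the sum-of-squares criterion is replaced by model probabilities under intrinsic priors.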
Pub Date: 2025-02-03 | eCollection Date: 2025-01-01 | DOI: 10.1080/02664763.2025.2458126
Sylwester Piątek
Inequality (concentration) curves such as the Lorenz, Bonferroni, and Zenga curves, as well as a new inequality curve, the D curve, are widely used to analyse inequality in wealth and income distributions. Quantile versions of these inequality curves are more robust to outliers. We discuss several parametric estimators of the quantile versions of the Zenga and D curves. A minimum distance (MD) estimator is proposed for these two curves and their related indices, and its consistency and asymptotic normality are proved. The MD estimator can also be used to estimate the inequality measures corresponding to the quantile versions of the inequality curves. The estimation methods considered are illustrated for the Weibull model, which has many applications in the life sciences, for example, fitting precipitation data. In econometrics it is also used to fit incomes, especially when a significant share of the population has low incomes, for example, in less developed countries or among low-paid jobs.
{"title":"Parametric estimation of quantile versions of Zenga and D inequality curves: methodology and application to Weibull distribution.","authors":"Sylwester Pia̧tek","doi":"10.1080/02664763.2025.2458126","DOIUrl":"https://doi.org/10.1080/02664763.2025.2458126","url":null,"abstract":"<p><p>Inequality (concentration) curves such as Lorenz, Bonferroni, Zenga curves, as well as a new inequality curve - the <i>D</i> curve, are broadly used to analyse inequalities in wealth and income distribution in certain populations. Quantile versions of these inequality curves are more robust to outliers. We discuss several parametric estimators of quantile versions of the Zenga and <i>D</i> curves. A minimum distance (MD) estimator is proposed for these two curves and the indices related to them. The consistency and asymptotic normality of the MD estimator is proved. The MD estimator can also be used to estimate the inequality measures corresponding to the quantile versions of the inequality curves. The estimation methods considered are illustrated in the case of the Weibull model, which has many applications in life sciences, for example, to fit the precipitation data. 
In econometrics it is also considered to fit incomes, especially in the case when a significant share of population have low incomes, for example, in less developed countries or among low-paid jobs.</p>","PeriodicalId":15239,"journal":{"name":"Journal of Applied Statistics","volume":"52 12","pages":"2226-2246"},"PeriodicalIF":1.1,"publicationDate":"2025-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12416017/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145029907","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
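The quantile versions studied in the paper are not reconstructed here, but the classical curves they modify are short computations. Empirical Lorenz and Zenga point-inequality curves; the Zenga formula below is the standard one and is used only for illustration:

```python
def lorenz(values, p):
    """Empirical Lorenz curve L(p): income share held by the poorest
    fraction p of the population."""
    xs = sorted(values)
    k = int(p * len(xs))
    return sum(xs[:k]) / sum(xs)

def zenga(values, p):
    """Classical Zenga point inequality Z(p) = 1 - (L(p)/p) * ((1-p)/(1-L(p))),
    comparing mean income below and above the p-th quantile. Requires 0 < p < 1."""
    lp = lorenz(values, p)
    return 1 - (lp / p) * ((1 - p) / (1 - lp))
```

For a perfectly equal distribution Z(p) is 0 everywhere; any concentration of income pushes it toward 1.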