We consider M-estimators and derive supremal inequalities of exponential or polynomial type, according to whether a boundedness or a moment condition is fulfilled. This enables us to derive rates of r-complete convergence and also to show r-quick convergence in the sense of Strasser.
"Supremal inequalities for convex M-estimators with applications to complete and quick convergence", by Dietmar Ferger. arXiv:2311.17623, arXiv - MATH - Statistics Theory, 2023-11-29.
Stochastic optimization methods encounter new challenges in the realm of streaming, characterized by a continuous flow of large, high-dimensional data. While first-order methods, like stochastic gradient descent, are the natural choice, they often struggle with ill-conditioned problems. In contrast, second-order methods, such as Newton's methods, offer a potential solution, but their computational demands render them impractical. This paper introduces adaptive stochastic optimization methods that bridge this gap: they address ill-conditioned problems while functioning in a streaming context. Notably, we present an adaptive inversion-free Newton's method with a computational complexity matching that of first-order methods, $\mathcal{O}(dN)$, where $d$ represents the number of dimensions/features and $N$ the number of data points. Theoretical analysis confirms their asymptotic efficiency, and empirical evidence demonstrates their effectiveness, especially in scenarios involving complex covariance structures and challenging initializations. In particular, our adaptive Newton's methods outperform existing methods while maintaining favorable computational efficiency.
"On Adaptive Stochastic Optimization for Streaming Data: A Newton's Method with O(dN) Operations", by Antoine Godichon-Baggioni (LPSM), Nicklas Werge. arXiv:2311.17753, arXiv - MATH - Statistics Theory, 2023-11-29.
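The paper's own O(dN) inversion-free scheme is not reproduced here, but the core idea of avoiding explicit matrix inversion in a streaming second-order method can be illustrated with classical recursive least squares, which maintains the inverse Hessian of the squared loss via the Sherman-Morrison identity at O(d^2) cost per sample. A minimal sketch under that substitution (the function name and synthetic data are illustrative, not from the paper):

```python
import numpy as np

def streaming_newton_lsq(stream, d, lam=1.0):
    """Inversion-free Newton-type steps for streaming least squares.
    The inverse of the (regularised) Hessian is updated with the
    Sherman-Morrison identity, so no O(d^3) inversion ever occurs."""
    H_inv = np.eye(d) / lam          # inverse of lam*I, the initial Hessian
    theta = np.zeros(d)
    for x, y in stream:
        Hx = H_inv @ x
        # Sherman-Morrison rank-one update: (H + x x^T)^{-1}
        H_inv -= np.outer(Hx, Hx) / (1.0 + x @ Hx)
        grad = (x @ theta - y) * x   # gradient of the pointwise squared loss
        theta -= H_inv @ grad        # Newton-type step with the running inverse
    return theta

# demo on synthetic noiseless linear data
rng = np.random.default_rng(0)
theta_star = np.array([1.0, -2.0, 0.5])
X = rng.standard_normal((500, 3))
theta_hat = streaming_newton_lsq(zip(X, X @ theta_star), d=3)
```

With the regularised start, this recursion reproduces the ridge solution at every step, which is why the estimate lands close to the generating coefficients after a few hundred samples.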
We study a generic class of random optimization problems (rops) and their typical behavior. The foundational aspects of the random duality theory (RDT) associated with rops were discussed in [StojnicRegRndDlt10], where it was shown that one can often infer rops' behavior even without actually solving them. Moreover, [StojnicRegRndDlt10] uncovered that various quantities relevant to rops (including, for example, their typical objective values) can be determined (in a large-dimensional context) even completely analytically. The key observation was that strong deterministic duality implies the so-called strong random duality, and therefore the full exactness of the analytical RDT characterizations. Here, we attack precisely those scenarios where strong deterministic duality is not necessarily present and connect them to the recent progress made in studying bilinearly indexed (bli) random processes in [Stojnicnflgscompyx23, Stojnicsflgscompyx23]. In particular, utilizing a fully lifted (fl) interpolating comparison mechanism introduced in [Stojnicnflgscompyx23], we establish the corresponding fully lifted RDT (fl RDT). We then rely on a stationarized fl interpolation realization introduced in [Stojnicsflgscompyx23] to obtain the complete stationarized fl RDT (sfl RDT). A few well-known problems are then discussed as illustrations of the wide range of practical applications implied by the generality of the considered rops.
"Fully lifted random duality theory", by Mihailo Stojnic. arXiv:2312.00070, arXiv - MATH - Statistics Theory, 2023-11-29.
Ensemble methods combine the predictions of several base models. We study whether or not including more models in an ensemble always improves its average performance. The answer depends on the kind of ensemble considered, as well as on the predictive metric chosen. We focus on situations where all members of the ensemble are a priori expected to perform equally well, which is the case for several popular methods like random forests or deep ensembles. In this setting, we essentially show that ensembles are getting better all the time if, and only if, the considered loss function is convex. More precisely, in that case, the average loss of the ensemble is a decreasing function of the number of models. When the loss function is nonconvex, we show a series of results that can be summarised by the insight that ensembles of good models keep getting better, and ensembles of bad models keep getting worse. To this end, we prove a new result on the monotonicity of tail probabilities that may be of independent interest. We illustrate our results on a simple machine learning problem (diagnosing melanomas using neural nets).
"Are ensembles getting better all the time?", by Pierre-Alexandre Mattei, Damien Garreau. arXiv:2311.17885, arXiv - MATH - Statistics Theory, 2023-11-29.
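The mechanism behind the convex-loss direction of the result is Jensen's inequality: for a convex loss, the loss of the averaged prediction is at most the average loss of the individual predictions. A toy check of that one inequality (not the paper's full monotonicity result), with squared loss and equally good base models:

```python
import random

def ensemble_vs_average_loss(preds, target, loss=lambda p, t: (p - t) ** 2):
    """Compare the loss of an averaging ensemble against the average
    loss of its members.  For a convex loss, Jensen's inequality gives
    loss(mean(preds), target) <= mean over p of loss(p, target)."""
    ensemble_loss = loss(sum(preds) / len(preds), target)
    average_loss = sum(loss(p, target) for p in preds) / len(preds)
    return ensemble_loss, average_loss

random.seed(0)
target = 0.7
# ten a-priori-exchangeable base models: unbiased, equal noise level
preds = [target + random.gauss(0, 0.3) for _ in range(10)]
e_loss, a_loss = ensemble_vs_average_loss(preds, target)
```

The inequality holds for every draw, not just on average, which is what makes the convex case so clean.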
In this paper we investigate the Bayesian approach to inverse Robin problems. These are problems, posed for certain elliptic boundary value problems, of determining a Robin coefficient on a hidden part of the boundary from Cauchy data on the observable part. Such a nonlinear inverse problem arises naturally in the initialisation of large-scale ice sheet models that are crucial in climate and sea-level predictions. We motivate the Bayesian approach for a prototypical Robin inverse problem by showing that the posterior mean converges in probability to the data-generating ground truth as the number of observations increases. Related to the stability theory for inverse Robin problems, we establish a logarithmic convergence rate for Sobolev-regular Robin coefficients, whereas for analytic coefficients we can attain an algebraic rate. The use of rescaled analytic Gaussian priors in posterior consistency for nonlinear inverse problems is new and may be of separate interest in other inverse problems. Our numerical results illustrate the convergence property in two observation settings.
"The Bayesian approach to inverse Robin problems", by Aksel Kaastrup Rasmussen, Fanny Seizilles, Mark Girolami, Ieva Kazlauskaite. arXiv:2311.17542, arXiv - MATH - Statistics Theory, 2023-11-29.
Filamentary structures, also called ridges, generalize the concept of modes of density functions and provide low-dimensional representations of point clouds. Using kernel-type plug-in estimators, we give asymptotic confidence regions for filamentary structures based on two bootstrap approaches: the multiplier bootstrap and the empirical bootstrap. Our theoretical framework respects the topological structure of ridges by allowing the possible existence of intersections. Different asymptotic behaviors of the estimators are analyzed depending on how flat the ridges are, and our confidence regions are shown to be asymptotically valid in different scenarios in a unified form. As a critical step in the derivation, we approximate the suprema of the relevant empirical processes by those of Gaussian processes, which are degenerate in our problem and are handled by anti-concentration inequalities for Gaussian processes that do not require positive infimum variance.
"Confidence Regions for Filamentary Structures", by Wanli Qiao. arXiv:2311.17831, arXiv - MATH - Statistics Theory, 2023-11-29.
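The multiplier bootstrap used in the paper approximates the distribution of the supremum of an empirical process by reweighting centred summands with independent Gaussian multipliers. A generic sketch of that idea on the simplest index class, half-line indicators (the paper applies it to kernel ridge estimators, not to the ECDF; the function name is illustrative):

```python
import numpy as np

def multiplier_bootstrap_sup(X, grid, B=2000, rng=None):
    """Multiplier-bootstrap draws of sup_t |G_n(t)|, where G_n is the
    empirical process of the sample X indexed by indicators 1{X <= t},
    t ranging over `grid`.  Gaussian multipliers xi_i reweight the
    centred summands; the supremum is taken over the grid."""
    rng = np.random.default_rng(rng)
    n = len(X)
    ind = (X[:, None] <= grid[None, :]).astype(float)  # n x |grid| indicators
    centred = ind - ind.mean(axis=0)                   # centre at each t
    xi = rng.standard_normal((B, n))                   # Gaussian multipliers
    sups = np.abs(xi @ centred / np.sqrt(n)).max(axis=1)
    return sups                                        # B bootstrap suprema

# e.g. a 95% critical value for a uniform confidence band of the ECDF
X = np.random.default_rng(1).standard_normal(300)
grid = np.linspace(-3.0, 3.0, 61)
crit = np.quantile(multiplier_bootstrap_sup(X, grid, rng=42), 0.95)
```

For this index class the limiting supremum is that of a Brownian bridge, so the bootstrap critical value lands near the classical Kolmogorov-Smirnov value of about 1.36.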
In randomized experiments with non-compliance, scholars have argued that the complier average causal effect (CACE) ought to be the main causal estimand. The literature on inference for the CACE has focused on the population CACE. However, in general, individuals in experiments are volunteers, so there is a risk that individuals partaking in a given experiment differ in important ways from a population of interest. It is thus of interest to focus on the sample at hand and to have easy-to-use and correct procedures for inference about the sample CACE. We consider a more general setting than in the previous literature and construct a confidence interval based on the Wald estimator in the form of a finite closed interval that is familiar to practitioners. Furthermore, given access to pre-treatment covariates, we propose a new regression adjustment estimator and associated methods for constructing confidence intervals. Finite-sample performance of the methods is examined through a Monte Carlo simulation, and the methods are applied to a job training experiment.
"Inference of Sample Complier Average Causal Effects in Completely Randomized Experiments", by Zhen Zhong, Per Johansson, Junni L. Zhang. arXiv:2311.17476, arXiv - MATH - Statistics Theory, 2023-11-29.
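The Wald estimator the confidence interval is built around is the textbook instrumental-variables ratio: the intent-to-treat effect on the outcome divided by the intent-to-treat effect on treatment take-up. A minimal sketch of that estimator alone (the paper's contribution, the interval and the regression adjustment, is not reproduced; data below are a made-up toy example):

```python
def wald_cace(y, z, d):
    """Wald (IV) estimator of the complier average causal effect:
    ITT effect on the outcome divided by ITT effect on take-up.
    y: outcomes; z: 0/1 randomised assignment; d: 0/1 treatment received."""
    mean = lambda v: sum(v) / len(v)
    y1 = mean([yi for yi, zi in zip(y, z) if zi == 1])
    y0 = mean([yi for yi, zi in zip(y, z) if zi == 0])
    d1 = mean([di for di, zi in zip(d, z) if zi == 1])
    d0 = mean([di for di, zi in zip(d, z) if zi == 0])
    return (y1 - y0) / (d1 - d0)

# toy data: 3 of 4 assigned units comply; treatment adds 2 to the outcome
z = [1, 1, 1, 1, 0, 0, 0, 0]   # randomised assignment
d = [1, 1, 1, 0, 0, 0, 0, 0]   # treatment actually received
y = [3, 3, 3, 1, 1, 1, 1, 1]   # baseline 1, +2 for treated compliers
cace = wald_cace(y, z, d)
```

Here the ITT effect on the outcome is 1.5 and the compliance rate is 0.75, so the estimator recovers the complier effect of 2.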
We study the problem of community detection in a general version of the block spin Ising model featuring M groups, a model inspired by the Curie-Weiss model of ferromagnetism in statistical mechanics. We solve the general problem of identifying any number of groups with any possible coupling constants. Until now, the problem had only been solved for the specific situation of two groups of identical size with identical interactions. Our results apply to more realistic situations, in which there are many groups of different sizes and with different interactions. In addition, we give an explicit algorithm that permits the reconstruction of the structure of the model from a sample of observations, based on the comparison of empirical correlations of the spin variables, thus enabling straightforward applications of the model to real-world voting data and to communities in biology.
"Detection of an Arbitrary Number of Communities in a Block Spin Ising Model", by Miguel Ballesteros, Ramsés H. Mena, José Luis Pérez, Gabor Toth. arXiv:2311.18112, arXiv - MATH - Statistics Theory, 2023-11-29.
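The comparison idea in the reconstruction algorithm can be sketched in a few lines: in a block spin model, empirical correlations within a group are systematically larger than between groups, so thresholding the correlation matrix splits the spins into communities. This is a hypothetical simplification of the paper's algorithm (ferromagnetic couplings, a hand-picked threshold, and synthetic data), not its exact procedure:

```python
import numpy as np

def recover_groups(spins, threshold):
    """Partition spins by comparing empirical pairwise correlations:
    spins whose correlation exceeds `threshold` are placed in the
    same community.  spins: samples x n array of +-1 values."""
    n = spins.shape[1]
    corr = np.corrcoef(spins, rowvar=False)
    groups, assigned = [], set()
    for i in range(n):
        if i in assigned:
            continue
        block = {i} | {j for j in range(i + 1, n) if corr[i, j] > threshold}
        groups.append(sorted(block))
        assigned |= block
    return groups

# toy sample: two communities of three spins, each driven by a shared field
rng = np.random.default_rng(0)
spins = np.empty((2000, 6))
for t in range(2000):
    for group in ([0, 1, 2], [3, 4, 5]):
        m = rng.choice([-1.0, 1.0])        # common group orientation
        for i in group:
            spins[t, i] = -m if rng.random() < 0.1 else m  # 10% noise flips

groups = recover_groups(spins, threshold=0.3)
```

Within-group correlations concentrate near 0.64 here while between-group ones hover near zero, so any threshold in between recovers the partition.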
Many generalised distributions exist for modelling data with vastly diverse characteristics. However, very few of these generalisations of the normal distribution have shape parameters with clear roles that determine, for instance, skewness and tail shape. In this chapter, we review existing skewing mechanisms and their properties in detail. Using the knowledge acquired, we add a skewness parameter to the body-tail generalised normal distribution [BTGN], yielding the FIN distribution with parameters for location, scale, body shape, skewness, and tail weight. Basic statistical properties of the FIN are provided, such as the probability density function (PDF), cumulative distribution function, moments, and likelihood equations. Additionally, the FIN PDF is extended to a multivariate setting using a Student t-copula, yielding the MFIN. The MFIN is applied to stock returns data, where it outperforms the t-copula multivariate generalised hyperbolic, Azzalini skew-t, hyperbolic, and normal inverse Gaussian distributions.
"In search of the perfect fit: interpretation, flexible modelling, and the existing generalisations of the normal distribution", by Andriette Bekker, Matthias Wagener, Muhammad Arashi. arXiv:2311.17962, arXiv - MATH - Statistics Theory, 2023-11-29.
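Among the classical skewing mechanisms such a review covers is Azzalini's: multiply a symmetric base density f by 2 F(alpha x), where F is the base CDF, to tilt mass to one side while remaining a valid density. A small numerical check of that mechanism on the standard normal (this illustrates the general idea only, not the chapter's FIN construction):

```python
import math

def skew_normal_pdf(x, alpha):
    """Azzalini's skewing mechanism applied to the standard normal:
    2 * phi(x) * Phi(alpha * x).  alpha = 0 recovers the symmetric
    base density; larger |alpha| tilts more mass to one side."""
    phi = math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)
    Phi = 0.5 * (1.0 + math.erf(alpha * x / math.sqrt(2.0)))
    return 2.0 * phi * Phi

# Riemann-sum check that the tilted function is still a density
total = sum(skew_normal_pdf(-8.0 + 0.01 * k, alpha=3.0)
            for k in range(1601)) * 0.01
```

The normalising constant 2 works for every alpha because the tilt F(alpha x) averages to 1/2 under the symmetric base density.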
In this paper, we consider two finite mixture models (FMMs) with inverted-Kumaraswamy distributed component lifetimes. Several stochastic ordering results between the FMMs are obtained. Mainly, we focus on three different cases in terms of the heterogeneity of the parameters. The usual stochastic order between the FMMs is established when heterogeneity is present in one parameter as well as in two parameters. In addition, we study the ageing-faster order in terms of the reversed hazard rate between two FMMs when the heterogeneity is in two parameters. For the case of heterogeneity in three parameters, we obtain comparison results based on the reversed hazard rate and likelihood ratio orders. The theoretical developments are illustrated using several examples and counterexamples.
"Stochastic orderings between two finite mixture models with inverted-Kumaraswamy distributed components", by Raju Bhakta, Pradip Kundu, Suchandan Kayal. arXiv:2311.17568, arXiv - MATH - Statistics Theory, 2023-11-29.
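The usual stochastic order compared here has a direct numerical reading: X is smaller than Y in the usual stochastic order exactly when the CDF of X dominates that of Y pointwise. A sketch of that check for two equal-weight mixtures, assuming the common parametrisation F(x) = (1 - (1 + x)^(-a))^b for the inverted Kumaraswamy CDF (the parameter values below are illustrative, not taken from the paper):

```python
def inv_kum_cdf(x, a, b):
    """CDF of the inverted Kumaraswamy distribution for x > 0,
    F(x) = (1 - (1 + x)^(-a))^b, shape parameters a, b > 0
    (an assumed but common parametrisation)."""
    return (1.0 - (1.0 + x) ** (-a)) ** b

def mixture_cdf(x, weights, params):
    """CDF of a finite mixture: weighted sum of component CDFs."""
    return sum(w * inv_kum_cdf(x, a, b) for w, (a, b) in zip(weights, params))

# pointwise CDF comparison on a grid: F >= G everywhere means the
# F-mixture is smaller in the usual stochastic order than the G-mixture
grid = [0.01 * k for k in range(1, 2000)]
F = [mixture_cdf(x, [0.5, 0.5], [(2.0, 1.0), (3.0, 1.0)]) for x in grid]
G = [mixture_cdf(x, [0.5, 0.5], [(1.0, 1.0), (1.5, 1.0)]) for x in grid]
dominated = all(f >= g for f, g in zip(F, G))
```

With b fixed, increasing a increases the CDF at every x, so a mixture with componentwise larger a is stochastically smaller, which the grid check confirms.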