The Marshall-Olkin Topp-Leone Half-Logistic-G Family of Distributions with Applications
Whatmore Sengweni, B. Oluyede, B. Makubate
Statistics, Optimization & Information Computing. Pub Date: 2023-08-07. DOI: 10.19139/soic-2310-5070-1082
A new family of distributions, called the Marshall-Olkin Topp-Leone Half-Logistic-G (MO-TLHL-G) family, is proposed and studied. Structural properties of the new family, including moments, incomplete moments, the distribution of the order statistics, and Rényi entropy, are derived. The model parameters are estimated by maximum likelihood. A simulation study examining the bias and mean square error of the maximum likelihood estimators and applications to real data sets illustrating the usefulness of the generalized distribution are given.
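The simulation recipe described in the abstract — estimate by maximum likelihood over many replications, then summarize bias and mean square error — can be sketched as follows. This is a hedged illustration on an exponential stand-in, whose MLE has a closed form, not the MO-TLHL-G family itself.

```python
import random
import statistics

random.seed(1)

def simulate_bias_mse(true_rate=2.0, n=200, replications=1000):
    """Monte Carlo bias and MSE of the MLE, here rate_hat = 1 / sample mean."""
    estimates = []
    for _ in range(replications):
        sample = [random.expovariate(true_rate) for _ in range(n)]
        estimates.append(1.0 / statistics.fmean(sample))  # closed-form MLE
    bias = statistics.fmean(estimates) - true_rate
    mse = statistics.fmean((e - true_rate) ** 2 for e in estimates)
    return bias, mse

bias, mse = simulate_bias_mse()
print(f"bias={bias:.4f}  mse={mse:.4f}")
```

For families like MO-TLHL-G without closed-form MLEs, the inner step would be a numerical likelihood maximization, but the bias/MSE bookkeeping is the same.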
Mean-TVaR Models for Diversified Multi-period Portfolio Optimization with Realistic Factors based on Uncertainty Theory
Khalid Belabbes, El Hachloufi Mostafa, Guennoun Zine El Abidine
Pub Date: 2023-08-06. DOI: 10.19139/soic-2310-5070-1657
The focus of any portfolio optimization problem is to imitate the stock markets and propose optimal solutions that address diverse investor expectations. In this paper, we propose new multi-period portfolio optimization problems in which security returns are uncertain variables, given by experts' estimations, and take the tail value at risk (TVaR) as a coherent risk measure of investment in the framework of uncertainty theory. Realistic constraints, including transaction costs, liquidity of securities, and portfolio diversification, are taken into account. Equivalent deterministic forms of the mean-TVaR models are derived under the assumption that the returns and liquidity of the securities obey certain types of uncertainty distributions. We adapt the Delphi method to evaluate the expected values, standard deviations, and turnover rates of the returns of the given securities. Finally, numerical examples illustrate the effectiveness of the proposed models.
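TVaR itself has a simple empirical form. A minimal sketch of the classical sample tail value at risk — the average loss at or beyond the value at risk — not the paper's uncertainty-theoretic version:

```python
import random

random.seed(7)

def tvar(losses, alpha=0.95):
    """Average of the worst (1 - alpha) share of losses (aka expected shortfall)."""
    ordered = sorted(losses)
    cutoff = int(alpha * len(ordered))  # index of the VaR order statistic
    tail = ordered[cutoff:]
    return sum(tail) / len(tail)

losses = [random.gauss(0.0, 1.0) for _ in range(100_000)]
print("VaR_0.95  ~", sorted(losses)[int(0.95 * len(losses))])  # ~1.64
print("TVaR_0.95 ~", tvar(losses, 0.95))                       # ~2.06
```

TVaR is coherent (in particular subadditive), which is why the paper adopts it over plain VaR for portfolio risk.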
A Discrete New Generalized Two Parameter Lindley Distribution: Properties, Estimation and Applications
Manal M. Salem, Moshira A. Ismail
Pub Date: 2023-08-06. DOI: 10.19139/soic-2310-5070-1392
In this paper, a discrete new generalized two-parameter Lindley distribution is proposed. The discrete Lindley and geometric distributions are sub-models of the proposed distribution. Its probability mass function exhibits different shapes, including decreasing, unimodal, and decreasing-increasing-decreasing. The proposed distribution has only two parameters, and its hazard rate function can accommodate increasing, constant, decreasing, and bathtub shapes. Moreover, it can describe equi-dispersed and over-dispersed data. Several distributional properties are obtained, and several reliability characteristics are derived, such as the cumulative distribution function, hazard rate function, second hazard rate function, mean residual life function, reverse hazard rate function, accumulated hazard rate function, and order statistics. In addition, the shapes of the hazard rate function are studied both analytically and through plots. The parameters are estimated by the maximum likelihood method. A simulation study assesses the performance of the maximum likelihood estimators. Finally, the flexibility of the model is illustrated using three real data sets.
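The hazard-shape analysis rests on the discrete hazard rate h(x) = p(x) / P(X >= x). A minimal numeric check on the geometric sub-model, whose hazard is constant:

```python
def hazard(pmf, x, upper=10_000):
    """Discrete hazard rate h(x) = p(x) / P(X >= x), survival tail truncated."""
    surv = sum(pmf(t) for t in range(x, upper))
    return pmf(x) / surv

q = 0.6
geom = lambda x: (1 - q) * q ** x   # geometric pmf on {0, 1, 2, ...}
hazards = [hazard(geom, x) for x in range(5)]
print(hazards)  # all ~ 0.4 = 1 - q: the constant-hazard shape class
```

The proposed distribution's extra parameter is what lets h(x) also take increasing, decreasing, and bathtub shapes instead of only this constant case.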
Nonparametric tests of independence using copula-based Renyi and Tsallis divergence measures
M. Mohammadi, Mahdi Emadi
Pub Date: 2023-08-05. DOI: 10.19139/soic-2310-5070-1691
We introduce new nonparametric independence tests based on Rényi and Tsallis divergence measures and the copula density function. These tests reduce the computational burden because they depend only on the copula density. The copula density is estimated using the local likelihood probit-transformation method, which is well suited to detecting independence. We also establish the consistency of the copula-based Rényi and Tsallis divergence estimators that serve as test statistics. A simulation study compares the empirical power of the new tests with that of the independence test based on the empirical copula. The results show that the suggested tests perform better under weak dependence. Finally, an application in hydrology is presented.
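A crude plug-in version of such a copula-based divergence statistic can be sketched with a histogram estimate of the copula density in place of the paper's local likelihood probit-transformation estimator; the grid size and binning below are illustrative assumptions only.

```python
import random

random.seed(3)

def tsallis_stat(xs, ys, m=10):
    """Order-2 Tsallis divergence of a binned copula density from uniformity.

    Pseudo-observations (normalized ranks) are binned into an m x m grid;
    under independence the statistic is near 0, growing with dependence.
    """
    n = len(xs)
    rank_x = {v: i for i, v in enumerate(sorted(xs))}
    rank_y = {v: i for i, v in enumerate(sorted(ys))}
    counts = [[0] * m for _ in range(m)]
    for x, y in zip(xs, ys):
        counts[m * rank_x[x] // n][m * rank_y[y] // n] += 1
    # plug-in estimate of integral(c^2) - 1 over the unit square
    return (m * m / (n * n)) * sum(c * c for row in counts for c in row) - 1.0

n = 2000
x = [random.gauss(0, 1) for _ in range(n)]
indep = tsallis_stat(x, [random.gauss(0, 1) for _ in range(n)])
dep = tsallis_stat(x, [xi + random.gauss(0, 0.3) for xi in x])
print(f"independent: {indep:.3f}  dependent: {dep:.3f}")
```

A formal test would calibrate this statistic's null distribution (e.g. by permutation); here the point is only that it depends on the data through the copula alone.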
Strong consistency of a deconvolution estimator of cumulative distribution function
Trang Bui Thuy, Cao Xuan Phuong
Pub Date: 2023-08-05. DOI: 10.19139/soic-2310-5070-1732
We study the strong consistency of a deconvolution estimator of the cumulative distribution function when the distribution of the error variable is assumed to be known exactly and to be either ordinary smooth or supersmooth.
Sine-Cosine Weighted Circular Distributions
F. Shahsanaei, Rahim Chinipardaz
Pub Date: 2023-08-05. DOI: 10.19139/soic-2310-5070-1681
This paper introduces a new family of multimodal and skew-symmetric circular distributions, namely, the sine-cosine weighted circular distributions. The fundamental properties of this family are examined for the general case and for three specific examples. Additionally, general solutions for estimating the parameters of any sine-cosine weighted circular distribution by maximum likelihood are provided. A likelihood-ratio test is performed to check the symmetry of the data. Lastly, two examples illustrate how the proposed model may be used to analyze two real-world case studies with asymmetric datasets.
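A sine-cosine weight over a circular base density can be illustrated as follows. The exact parameterization below — a uniform base on [0, 2π) times the weight w(t) = 1 + a sin t + b cos t, valid when |a| + |b| <= 1 — is an assumption for illustration, chosen so that the weight integrates out and no extra normalizing constant is needed.

```python
import math

def density(t, a=0.5, b=0.3):
    """Uniform circular base times a sine-cosine weight; skewed when a != 0."""
    return (1.0 + a * math.sin(t) + b * math.cos(t)) / (2.0 * math.pi)

# Riemann check over one full period that the density integrates to 1
steps = 100_000
h = 2.0 * math.pi / steps
total = sum(density(i * h) * h for i in range(steps))
print(round(total, 6))  # -> 1.0
```

Richer weights (higher harmonics, or a non-uniform base such as the von Mises) follow the same pattern but generally require a numerically computed normalizing constant.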
Isomorphism Check for Two-level Multi-Stage Factorial Designs with Randomization Restrictions via an R Package: IsoCheck
Pratishtha Batra, Neil Spencer, Pritam Ranjan
Pub Date: 2023-08-05. DOI: 10.19139/soic-2310-5070-1494
Factorial designs are often used in various industrial and sociological experiments to identify significant factors and factor combinations that may affect the process response. In the statistics literature, several studies have investigated the analysis, construction, and isomorphism of factorial and fractional factorial designs. When there are multiple choices for a design, it is helpful to have an easy-to-use tool for identifying which are distinct and which of those can be efficiently analyzed or have good theoretical properties. For this task, we present an R package called IsoCheck that checks the isomorphism of multi-stage 2^n factorial experiments with randomization restrictions. By representing the factors and their combinations as a finite projective geometry, IsoCheck recasts the problem of searching over all possible relabelings as a search over collineations, then exploits projective geometric properties of the space to make the search much more efficient. Furthermore, a bitstring representation of the factorial effects is used to characterize all possible rearrangements of designs, thus facilitating quick comparisons after relabeling. This paper presents several detailed examples with R code that illustrate the usage of the main functions in IsoCheck. Besides checking the equivalence and isomorphism of 2^n multi-stage factorial designs, we demonstrate how the functions of the package can be used to create a catalog of all non-isomorphic designs and to find good designs under a suitably defined ranking criterion.
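The bitstring idea can be sketched independently of IsoCheck's actual API: encode each factorial effect of a 2^n design as an n-bit integer, generate a subspace of effects by XOR closure, and apply a relabeling as an invertible binary matrix over GF(2). All names below are illustrative, not the package's functions.

```python
def span(generators):
    """XOR-closure of the generators: the subspace of effects they span."""
    effects = {0}
    for g in generators:
        effects |= {e ^ g for e in effects}
    return effects - {0}   # drop the identity (the "no effect" vector)

def relabel(matrix_rows, effect, n):
    """Apply an n x n binary matrix (rows encoded as ints) to an effect."""
    out = 0
    for i in range(n):
        parity = bin(matrix_rows[i] & effect).count("1") % 2  # GF(2) dot product
        out |= parity << i
    return out

n = 3
sub = span([0b001, 0b010])      # effects A and B of a 2^3 design (plus AB)
cycle = [0b100, 0b001, 0b010]   # cyclic factor relabelling A -> B -> C -> A
image = {relabel(cycle, e, n) for e in sub}
print(sorted(sub), "->", sorted(image))  # [1, 2, 3] -> [2, 4, 6]
```

Two randomization-restriction structures are isomorphic exactly when some such invertible relabeling maps one collection of effect subspaces onto the other; IsoCheck's contribution is searching that space efficiently via collineations rather than brute force.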
Copula based learning for directed acyclic graphs
Russul Mohsin, Vahid Rezaei Tabar
Pub Date: 2023-08-03. DOI: 10.19139/soic-2310-5070-1634
We address the learning of a DAG model for high-dimensional random variables under both normal and non-normal assumptions. To this end, a copula function is used to connect dependent variables. In addition to the normal copula, the three most widely applicable copulas, FGM, Clayton, and Gumbel, are investigated, covering all three kinds of dependence structure: negative, positive, and weak; their detailed calculations are also presented. Moreover, the structure function is determined exactly by choosing a suitable copula model, implemented in the statistical software R, for each assumed direction among the nodes; the direction with the maximum structure function is preferred. Algorithms for finding these directions and the corresponding maximization procedures are also provided. Finally, extensive tabulations and simulation studies are given, and, to give a clear picture of the proposed strategies, a real-world application is analyzed.
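The FGM copula mentioned above has the tractable density c(u, v) = 1 + theta*(1 - 2u)*(1 - 2v) for |theta| <= 1, and it can be sampled by inverting the conditional CDF C(v | u) = v + theta*v*(1 - v)*(1 - 2u). A sketch — the conditional-inversion route is a standard construction, not necessarily the authors' procedure:

```python
import math
import random

random.seed(5)

def sample_fgm(theta, n):
    """Sample (U, V) from the FGM copula by inverting C(v | u), a quadratic in v."""
    pairs = []
    for _ in range(n):
        u, w = random.random(), random.random()
        a = theta * (1 - 2 * u)
        if abs(a) < 1e-12:
            v = w  # conditional is uniform when a = 0
        else:
            v = ((1 + a) - math.sqrt((1 + a) ** 2 - 4 * a * w)) / (2 * a)
        pairs.append((u, v))
    return pairs

theta = 0.8
pairs = sample_fgm(theta, 50_000)
mean_uv = sum(u * v for u, v in pairs) / len(pairs)
rho = 12 * mean_uv - 3   # Spearman's rho of a copula is 12*E[UV] - 3
print(round(rho, 3), "~", round(theta / 3, 3))
```

Spearman's rho for FGM is theta/3, capped at 1/3 in magnitude, which is why FGM is the weak-dependence member of the trio alongside Clayton and Gumbel.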
Random forests in the zero to k inflated Power series populations
H. Saboori, Mahdi Doostparast
Pub Date: 2023-08-03. DOI: 10.19139/soic-2310-5070-1773
Tree-based algorithms are a class of useful, versatile, and popular tools in data mining and machine learning. Indeed, tree aggregation methods, such as random forests, are among the most powerful approaches to boost the performance of predictions. In this article, we apply tree-based methods to model and predict discrete data using a highly flexible model. Inflation may occur in discrete data at some points: at zero, at one, or elsewhere, and even at two or more points. We use models for inflated data sets based on a common discrete family, the power series models, one of the most widely used families for such models. This family includes common discrete models such as the Poisson, negative binomial, multinomial, and logarithmic series models. The main idea of this article is to use zero-to-k (k = 0, 1, ...) inflated regression models based on the power series family to fit regression decision trees and random forests. An important point is that these models can be used not only for inflated discrete data but also for non-inflated discrete data; indeed, the model can be applied to a wide range of discrete data sets.
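The simplest member of this construction, a zero-inflated Poisson, already shows the overdispersion such models accommodate; a small numeric check (the parameter values are arbitrary):

```python
import math

def zip_pmf(x, lam, pi):
    """Zero-inflated Poisson: extra mass pi at 0 on top of a Poisson(lam) core."""
    pois = math.exp(-lam) * lam ** x / math.factorial(x)
    return pi * (x == 0) + (1 - pi) * pois

lam, pi = 3.0, 0.25
support = range(60)  # tail beyond 60 is negligible for lam = 3
mean = sum(x * zip_pmf(x, lam, pi) for x in support)
second = sum(x * x * zip_pmf(x, lam, pi) for x in support)
var = second - mean ** 2
print(f"mean={mean:.4f} var={var:.4f}")  # var > mean: overdispersed
```

Inflating further points (one, two, ..., k) adds one extra mass parameter per inflated point to the same power-series core, which is the pmf a tree or forest would model at each leaf.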
Analysis and Applications of Quantile Approach on Residual Extropy
A. H. Khammar, Vahideh Ahrari, Seyed Mahdi Amir Jahanshahi
Pub Date: 2023-08-03. DOI: 10.19139/soic-2310-5070-1226
Extropy is a measure of the uncertainty of a random variable. Motivated by the wide applicability of quantile functions in modeling and analyzing statistical data, in this paper we study the quantile version of the extropy of the residual lifetime variable, the "residual quantile extropy" for short. Unlike the residual extropy function, the residual quantile extropy determines the quantile density function uniquely through a simple relationship. Aging classes, stochastic orders, and characterization results are derived using the proposed quantile measure of uncertainty. We also suggest some applications related to (n − i + 1)-out-of-n systems and distorted random variables. Finally, a nonparametric estimator for the residual quantile extropy is provided, and its performance is evaluated in a simulation study.
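The extropy underlying all of this is J(X) = -(1/2) * integral of f(x)^2 dx for a density f. As a sanity check before moving to residual and quantile-based variants, the closed form J = -rate/4 for an exponential density can be confirmed numerically:

```python
import math

def extropy_numeric(pdf, lo, hi, steps=200_000):
    """Midpoint-rule approximation of J(X) = -(1/2) * integral of pdf(x)^2 dx."""
    h = (hi - lo) / steps
    return -0.5 * sum(pdf(lo + (i + 0.5) * h) ** 2 for i in range(steps)) * h

rate = 2.0
pdf = lambda x: rate * math.exp(-rate * x)
print(round(extropy_numeric(pdf, 0.0, 20.0), 4))  # -> -0.5 == -rate/4
```

The quantile reformulation replaces this density integral with one in the quantile density q(u) = 1/f(Q(u)), which is what lets the residual quantile extropy pin down q uniquely.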