Pub Date: 2023-09-03 | DOI: 10.18187/pjsor.v19i3.2877
Preeti Patidar, H. P. Singh
In this paper we suggest a class of estimators of the population mean of a sensitive variable under the optional randomized response technique reported in Gupta et al. (2014). We obtain the mean squared error (MSE) of the suggested class of estimators up to the first order of approximation, together with the optimum conditions under which the MSE of the proposed class is minimized. An empirical study is carried out to compare the performance of the suggested class of estimators with existing estimators. It is found that the proposed class of estimators performs better than the existing estimators, including that of Grover and Kaur (2019).
{"title":"An Improved Class of Estimators Of Population Mean of Sensitive Variable Using Optional Randomized Response Technique","authors":"Preeti Patidar, H. P. Singh","doi":"10.18187/pjsor.v19i3.2877","DOIUrl":"https://doi.org/10.18187/pjsor.v19i3.2877","url":null,"abstract":"In this paper we have suggested a class of estimators of population mean of sensitive variable under optional randomized response technique as reported in Gupta et al  (2014). We have obtained the mean squared error (MSE) of the suggested class of estimators up to the first order of approximation. The optimum conditions are obtained at which the (MSE) of the proposed class of estimators is minimum. An empirical study is carried out to show the performance of the suggested class of estimators over existing estimators .It is found that the performance of proposed class of estimators is better than the existing estimators including Grover and Kaur (2019).","PeriodicalId":19973,"journal":{"name":"Pakistan Journal of Statistics and Operation Research","volume":null,"pages":null},"PeriodicalIF":1.5,"publicationDate":"2023-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42348621","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-09-03 | DOI: 10.18187/pjsor.v19i3.3885
Dimpal Pathak, P. Hazarika, Subrata Chakraborty, Jondeep Das, G. G. Hamedani
This paper considers a new family of trimodal skew logistic distributions. Some properties of this distribution, including moments, the moment generating function, entropy, maximum likelihood estimates of the parameters and some other properties, are presented. A simulation study is conducted to examine the performance of the parameter estimates. The model is fitted to two real-life datasets by numerical optimization. Results show that, for these datasets, the new distribution provides a better fit than the logistic, skew logistic and alpha skew logistic distributions, based on the values of AIC and BIC.
{"title":"Modeling Tri-Model Data With a New Skew Logistic Distribution","authors":"Dimpal Pathak, P. Hazarika, Subrata Chakraborty, Jondeep Das, G. G. Hamedani","doi":"10.18187/pjsor.v19i3.3885","DOIUrl":"https://doi.org/10.18187/pjsor.v19i3.3885","url":null,"abstract":"This paper considers a new family of the trimodal skew logistic distributions. Some properties of this distribution, including moments, moments generating function, entropy, maximum likelihood estimates of parameters and some other properties, are presented. A simulation study is conducted to examine the performance of the parameters. Numerical optimization is carried out via two real-life datasets. Results show that the new distribution is better fitted in terms of these datasets among logistic, skew logistic and alpha skew logistic distributions based on the value of AIC and BIC.","PeriodicalId":19973,"journal":{"name":"Pakistan Journal of Statistics and Operation Research","volume":null,"pages":null},"PeriodicalIF":1.5,"publicationDate":"2023-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43378624","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-09-03 | DOI: 10.18187/pjsor.v19i3.4063
Zakir Hussain Wani, S.E.H. Rizvi
In this paper, separate and combined ratio type estimators have been proposed in the presence of non-response for estimating the population mean under stratified random sampling, when non-response occurs on both the study and the auxiliary variables and the population mean of the auxiliary variable is unknown. The expressions for the biases and mean square errors (MSEs) of the proposed estimators have been derived to the first order of approximation. The proposed estimators have been compared with other existing estimators using the MSE criterion, and the conditions under which the proposed estimators perform better than the existing estimators have been obtained. In addition to the theoretical work, an empirical study was conducted.
Title: Assessing the Effect of Non-response in Stratified Random Sampling using Enhanced Ratio Type Estimators under Double Sampling Strategy
Pub Date: 2023-09-02 | DOI: 10.18187/pjsor.v19i3.4123
Salwa L. Alkhayyat, Heba Soltan Mohamed, Nadeem Shafique Butt, H. Yousof, Emadeldin I. A. Ali
The autoregressive model is a representation of a certain kind of random process in statistics, insurance, signal processing, and econometrics; as such, it is used to describe time-varying processes in nature, economics, insurance, and other fields. In this article, a novel version of the autoregressive model is proposed, the so-called partially autoregressive (PAR(1)) model. The new approach relies on an algorithm that we formulate to facilitate statistical prediction in light of the rapid developments in time series models; the algorithm is based on the values of the autocorrelation and partial autocorrelation functions. The new technique is assessed by re-estimating the actual time series values. Finally, the results of the PAR(1) model are compared with the Holt-Winters model using the Ljung-Box test and its corresponding p-value. A comprehensive analysis of the model residuals is presented, and the autocorrelation analysis for both point and interval forecasting is given with the relevant plots.
{"title":"Modeling the Asymmetric Reinsurance Revenues Data using the Partially Autoregressive Time Series Model: Statistical Forecasting and Residuals Analysis","authors":"Salwa L. Alkhayyat, Heba Soltan Mohamed, Nadeem Shafique Butt, H. Yousof, Emadeldin I. A. Ali","doi":"10.18187/pjsor.v19i3.4123","DOIUrl":"https://doi.org/10.18187/pjsor.v19i3.4123","url":null,"abstract":"The autoregressive model is a representation of a certain kind of random process in statistics, insurance, signal processing, and econometrics; as such, it is used to describe some time-varying processes in nature, economics and insurance, etc. In this article, a novel version of the autoregressive model is proposed, in the so-called the partially autoregressive (PAR(1)) model. The results of the new approach depended on a new algorithm that we formulated to facilitate the process of statistical prediction in light of the rapid developments in time series models. The new algorithm is based on the values of the autocorrelation and partial autocorrelation functions. The new technique is assessed via re-estimating the actual time series values. Finally, the results of the PAR(1) model is compared with the Holt-Winters model under the Ljung-Box test and its corresponding p-value. A comprehensive analysis for the model residuals is presented. The matrix of the autocorrelation analysis for both points forecasting and interval forecasting are given with its relevant plots.","PeriodicalId":19973,"journal":{"name":"Pakistan Journal of Statistics and Operation Research","volume":null,"pages":null},"PeriodicalIF":1.5,"publicationDate":"2023-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48607408","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-06-02 | DOI: 10.18187/pjsor.v19i2.3633
Mohamed K. A. Refaie, Nadeem Shafique Butt, Emadeldin I. A. Ali
In this work, we construct a flexible three-parameter Chen modification. The hazard rate of the new Chen extension accommodates "J-shaped", "monotonically increasing", "U-shaped", and "upside-down (reversed bathtub)" forms. We derive pertinent statistical features, and a few bivariate distributions are generated. The model parameters are estimated by maximum likelihood, and the maximum likelihood estimators are assessed via graphical simulations. To demonstrate the applicability of the new approach, two real data sets are analyzed. The Akaike information criterion, Bayesian information criterion, Cramer-von Mises criterion, Anderson-Darling criterion, and the Kolmogorov-Smirnov test with its associated p-value are used to compare the new model with a variety of popular competing models.
{"title":"A new probability distribution: properties, copulas and applications in medicine and engineering","authors":"Mohamed K. A. Refaie, Nadeem Shafique Butt, Emadeldin I. A. Ali","doi":"10.18187/pjsor.v19i2.3633","DOIUrl":"https://doi.org/10.18187/pjsor.v19i2.3633","url":null,"abstract":"In this work, we construct a three-parameter Chen modification that is flexible. The \"J shape\", \"monotonically increasing\", \"U shape,\" and \"upside down (reversed bathtub)\" hazard rate forms are all supported by the new Chen extension's hazard rate. We derive pertinent statistical features. A few distributions of the bivariate kind are generated. For evaluating the model parameters, we took the maximum likelihood estimation approach into consideration. Maximal likelihood estimators are evaluated via graphical simulations. To demonstrate the applicability of the new approach, two genuine data sets are taken into consideration and examined. The Akaike Information criterion, Bayesian Information criterion, Cramer-von Mises criterion, Anderson-Darling criterion, Kolmogorov-Smirnov test, and its related p-value are used to evaluate the new model with a variety of popular competing models.","PeriodicalId":19973,"journal":{"name":"Pakistan Journal of Statistics and Operation Research","volume":null,"pages":null},"PeriodicalIF":1.5,"publicationDate":"2023-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43064856","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-06-02 | DOI: 10.18187/pjsor.v19i2.4310
Mohamed K. A. Refaie, Emadeldin I. A. Ali
Building on Yousof et al. (2017a), a new one-parameter G family of distributions, called the reciprocal Burr X-G family, is defined and studied. A special member based on the well-known Burr type XII model, called the reciprocal Burr X-Burr XII distribution, is studied and analyzed. Relevant properties of the new family, including ordinary moments, moments of the residual life and of the reversed residual life, and incomplete moments, are derived, and some of them are analyzed numerically. Four applications to real-life data sets are presented to illustrate the applicability and importance of the new family, which proves to be highly capable and flexible in practical applications and in the statistical modeling of real data.
{"title":"A New Reciprocal System of Burr Type X Densities with Applications in Engineering, Reliability, Economy, and Medicine","authors":"Mohamed K. A. Refaie, Emadeldin I. A. Ali","doi":"10.18187/pjsor.v19i2.4310","DOIUrl":"https://doi.org/10.18187/pjsor.v19i2.4310","url":null,"abstract":"Depending on Yousof et al. (2017a), a new one parameter G family of distributions called the reciprocal Burr X-G family is defined and studied. Special member based on the well-known Burr type XII model called the reciprocal Burr X-Burr XII distribution is studied and analyzed. Relevant properties of the new family including ordinary moments, moment of the residual life, moment of the reversed residual life and incomplete moments are derived and some of them are numerically analyzed. Four different applications to real-life data sets are presented to illustrate the applicability and importance of the new family. The new family has proven to be highly capable and flexible in practical applications and statistical modeling of real data.","PeriodicalId":19973,"journal":{"name":"Pakistan Journal of Statistics and Operation Research","volume":null,"pages":null},"PeriodicalIF":1.5,"publicationDate":"2023-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45220402","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-06-02 | DOI: 10.18187/pjsor.v19i2.4201
Teg Alam
The management of any industry must strive to meet multiple financial objectives, including capital structure, dividend policy, and earnings growth. This paper proposes an approach to analyzing how financial resources should be allocated optimally using a multi-objective decision-making model. The financial statements of Al Rajhi Bank are used as a case study, and all data are drawn from those statements. Overall, the study's results show that all objectives have been achieved. The model enables banking and other industries to formulate strategies for dealing with various financial situations. The results are calculated and verified using LINGO 18.0 x64 software. Hence, the proposed model can support financial decisions and the development of strategies for dealing with various economic frameworks.
{"title":"Optimal Financial Resource Allocation Using Multiobjective Decision Making Model","authors":"Teg Alam","doi":"10.18187/pjsor.v19i2.4201","DOIUrl":"https://doi.org/10.18187/pjsor.v19i2.4201","url":null,"abstract":"The management of each industry must strive to meet multiple financial objectives, including capital structure, dividend policy, and earnings growth. The paper proposes an approach to analyze how financial resources should be allocated optimally using a multi-objective decision-making model. As part of the study, Al Rajhi banks' financial statements are used as a case study. All of the data is drawn from the banks' financial statements. Overall, the study's results show that all objectives have been achieved. This model enables banking and other industries to formulate strategies for dealing with various financial situations. The study's results are calculated and verified using LINGO 18.0 x64 software. Hence, the proposed model can determine financial decisions and develop strategies for dealing with various economic frameworks.","PeriodicalId":19973,"journal":{"name":"Pakistan Journal of Statistics and Operation Research","volume":null,"pages":null},"PeriodicalIF":1.5,"publicationDate":"2023-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44573480","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-06-02 | DOI: 10.18187/pjsor.v19i2.4233
Siloko Israel Uzuazor, Ojobor Sunday Amaju
Higher-order kernel estimation and kernel density derivative estimation are techniques for reducing the asymptotic mean integrated squared error in nonparametric kernel density estimation; a reduction in this error criterion indicates better performance. Kernel estimation relies heavily on the bandwidth, and the reduction methods identified in the literature depend on bandwidths for their implementation. This study examines the performance of the higher-order kernel estimation and kernel density derivative estimation techniques with reference to the Gaussian kernel estimator, owing to its wide applicability in real-life settings. Explicit expressions for the bandwidth selectors of the two techniques in relation to the Gaussian kernel were obtained, and the corresponding bandwidths computed. Empirical results using two data sets revealed that kernel density derivative estimation outperformed higher-order kernel estimation with the asymptotic mean integrated squared error as the criterion function.
{"title":"A Comparative Study of Higher Order Kernel Estimation and Kernel Density Derivative Estimation of the Gaussian Kernel Estimator with Data Application","authors":"Siloko Israel Uzuazor, Ojobor Sunday Amaju","doi":"10.18187/pjsor.v19i2.4233","DOIUrl":"https://doi.org/10.18187/pjsor.v19i2.4233","url":null,"abstract":"Higher-order kernel estimation and kernel density derivative estimation are techniques for reducing the asymptotic mean integrated squared error in nonparametric kernel density estimation. A reduction in the error criterion is an indication of better performance. The estimation of kernel function relies greatly on bandwidth and the identified reduction methods in the literature are bandwidths reliant for their implementation. This study examines the performance of higher order kernel estimation and kernel density derivatives estimation techniques with reference to the Gaussian kernel estimator owing to its wide applicability in real-life-settings. The explicit expressions for the bandwidth selectors of the two techniques in relation to the Gaussian kernel and the bandwidths were accurately obtained. Empirical results using two data sets obviously revealed that kernel density derivative estimation outperformed the higher order kernel estimation excellently well with the asymptotic mean integrated squared error as the criterion function.","PeriodicalId":19973,"journal":{"name":"Pakistan Journal of Statistics and Operation Research","volume":null,"pages":null},"PeriodicalIF":1.5,"publicationDate":"2023-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43949141","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-06-02 | DOI: 10.18187/pjsor.v19i2.3089
P. P. Oktaviana, K. Fithriasari
Indonesia is a country prone to earthquakes because it is located in the Pacific Ring of Fire. These earthquakes cause extensive damage and casualties. In this paper, we use Bayesian simulation with the Gutenberg-Richter model and the copula method to estimate earthquake risk parameters, specifically the probability and the recurrence (return) period of an earthquake occurrence in Indonesia. These risk parameters are estimated from the dependence structure of the frequency and magnitude of earthquakes. The dependence structure can be determined using the Gutenberg-Richter model and the copula method: the Gutenberg-Richter model is based on linear regression, while the copula method is a statistical approach to dependence that does not rely on linearity and normality assumptions about the data. Bayesian simulation is used to estimate the parameters by simulation. The data are annual frequencies and magnitudes (magnitude ≥ 4 on the Richter scale) of earthquakes occurring in Indonesia over four years, obtained from the Meteorological, Climatological, and Geophysical Agency of Indonesia. The analysis proceeds in several steps: first, we perform a regression analysis of the frequency and magnitude of the earthquakes to determine the Gutenberg-Richter model; second, we perform the copula analysis; third, we estimate the probability and the recurrence (return) period of an earthquake occurrence using Bayesian simulation based on the results of the first two steps. The results indicate that Bayesian simulation can estimate the risk parameters very well.
{"title":"Implementation of Bayesian Simulation for Earthquake Disaster Risk Analysis in Indonesia based on Gutenberg Richter Model and Copula Method","authors":"P. P. Oktaviana, K. Fithriasari","doi":"10.18187/pjsor.v19i2.3089","DOIUrl":"https://doi.org/10.18187/pjsor.v19i2.3089","url":null,"abstract":"Indonesia is a country prone to earthquakes because it is located in the Pasific ring of fire area. The earthquakes caused a lot of damages and casualties. In this paper, we use Bayesian Simulation on Gutenberg Richter model and Copula method to estimate the risk parameters of earthquake, specifically the probability and the recurrence (return) period of an earthquake occurrence in Indonesia. Those risk parameters are estimated from dependence structure of frequency and magnitude of earthquakes. The dependence structure can be determined by using Gutenberg Richter model and Copula method. The Gutenberg Richter model is a model based on linear regression used to determine dependence structure, while the Copula method is a statistical method used to determine dependence structure that ignores linearity and normality assumptions of data. Bayesian Simulation is a method used to estimate parameters based on simulation. The data used is an annual data of frequency and magnitude (magnitude ≥ 4 Richter Scale) of earthquakes occur in Indonesia for 4 years from Meteorological, Climatological, and Geophysical Agency of Indonesia. There are several steps of analysis to be performed: firstly, we perform regression analysis of frequency and magnitude of the earthquakes to determine Gutenberg Richter Model; secondly, we perform Copula analysis; thirdly, we estimate probability and the recurrence (return) period of an earthquake occurrence using Bayesian Simulation based on the result of step one and two. The result indicates Bayesian Simulation can estimate risk parameters very well.","PeriodicalId":19973,"journal":{"name":"Pakistan Journal of Statistics and Operation Research","volume":null,"pages":null},"PeriodicalIF":1.5,"publicationDate":"2023-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43790224","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-06-02 | DOI: 10.18187/pjsor.v19i2.3922
R. Ganaie, V. Rajagopalan
In this paper, we introduce a new model, the Weighted Power Quasi Lindley distribution, obtained from the Power Quasi Lindley distribution by applying the weighting technique. We derive various mathematical and statistical properties, such as order statistics, the likelihood ratio test, moments, the harmonic mean, income distribution curves, entropy, and reliability measures. Parameter estimation by the method of maximum likelihood is discussed, and the Fisher information matrix is obtained. Finally, the applicability and potential of the new distribution in handling data are investigated using two real-life data sets.
{"title":"The Weighted Power Quasi Lindley Distribution with Properties and Applications of Life-time Data","authors":"R. Ganaie, V. Rajagopalan","doi":"10.18187/pjsor.v19i2.3922","DOIUrl":"https://doi.org/10.18187/pjsor.v19i2.3922","url":null,"abstract":"In this paper, we have executed a new model of Power Quasi Lindley distribution known as Weighted Power Quasi Lindley distribution by introducing the weighted technique. We have also executed its various mathematical and statistical properties like order statistics, likelihood Ratio test, moments, harmonic mean, Income distribution curves, entropy and reliability measures. We also have discussed its parameter estimation by applying the method of maximum likelihood estimator and also we have obtained its Fisher’s information matrix. Finally, the applicability and potentiality of the new distribution in handling data has been investigated by executing the two real life data sets.","PeriodicalId":19973,"journal":{"name":"Pakistan Journal of Statistics and Operation Research","volume":null,"pages":null},"PeriodicalIF":1.5,"publicationDate":"2023-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45245913","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}