Modeling the Asymmetric Reinsurance Revenues Data using the Partially Autoregressive Time Series Model: Statistical Forecasting and Residuals Analysis
Pub Date: 2023-09-02, DOI: 10.18187/pjsor.v19i3.4123
Salwa L. Alkhayyat, Heba Soltan Mohamed, Nadeem Shafique Butt, H. Yousof, Emadeldin I. A. Ali
The autoregressive model is a representation of a certain kind of random process in statistics, insurance, signal processing, and econometrics; as such, it is used to describe time-varying processes in nature, economics, insurance, and other fields. In this article, a novel version of the autoregressive model, the so-called partially autoregressive (PAR(1)) model, is proposed. The new approach rests on an algorithm we formulated to facilitate statistical prediction in light of the rapid developments in time series models; the algorithm is driven by the values of the autocorrelation and partial autocorrelation functions. The new technique is assessed by re-estimating the actual time series values. Finally, the results of the PAR(1) model are compared with those of the Holt-Winters model using the Ljung-Box test and its corresponding p-value. A comprehensive analysis of the model residuals is presented, and the autocorrelation matrix for both point and interval forecasting is given with its relevant plots.
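The listing does not spell out the PAR(1) algorithm itself, but the workflow the abstract describes (ACF/PACF-guided model choice, a Holt-Winters benchmark, Ljung-Box residual checks) can be sketched with standard tools. A minimal illustration, assuming statsmodels and a synthetic revenue-like series in place of the reinsurance data:

```python
import numpy as np
from statsmodels.tsa.stattools import acf, pacf
from statsmodels.tsa.ar_model import AutoReg
from statsmodels.tsa.holtwinters import ExponentialSmoothing
from statsmodels.stats.diagnostic import acorr_ljungbox

rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(0.5, 1.0, 120)) + 50.0  # synthetic revenue-like series

# Inspect ACF/PACF: a PACF that cuts off after lag 1 suggests an AR(1)-type fit.
print("ACF :", np.round(acf(y, nlags=5), 3))
print("PACF:", np.round(pacf(y, nlags=5), 3))

ar1 = AutoReg(y, lags=1, trend="c").fit()        # plain AR(1) stand-in for PAR(1)
hw = ExponentialSmoothing(y, trend="add").fit()  # Holt-Winters benchmark

# Ljung-Box on residuals: large p-values indicate white-noise residuals.
print(acorr_ljungbox(ar1.resid, lags=[10]))
print(acorr_ljungbox(hw.resid, lags=[10]))
```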
{"title":"Modeling the Asymmetric Reinsurance Revenues Data using the Partially Autoregressive Time Series Model: Statistical Forecasting and Residuals Analysis","authors":"Salwa L. Alkhayyat, Heba Soltan Mohamed, Nadeem Shafique Butt, H. Yousof, Emadeldin I. A. Ali","doi":"10.18187/pjsor.v19i3.4123","DOIUrl":"https://doi.org/10.18187/pjsor.v19i3.4123","url":null,"abstract":"The autoregressive model is a representation of a certain kind of random process in statistics, insurance, signal processing, and econometrics; as such, it is used to describe some time-varying processes in nature, economics and insurance, etc. In this article, a novel version of the autoregressive model is proposed, in the so-called the partially autoregressive (PAR(1)) model. The results of the new approach depended on a new algorithm that we formulated to facilitate the process of statistical prediction in light of the rapid developments in time series models. The new algorithm is based on the values of the autocorrelation and partial autocorrelation functions. The new technique is assessed via re-estimating the actual time series values. Finally, the results of the PAR(1) model is compared with the Holt-Winters model under the Ljung-Box test and its corresponding p-value. A comprehensive analysis for the model residuals is presented. The matrix of the autocorrelation analysis for both points forecasting and interval forecasting are given with its relevant plots.","PeriodicalId":19973,"journal":{"name":"Pakistan Journal of Statistics and Operation Research","volume":" ","pages":""},"PeriodicalIF":1.5,"publicationDate":"2023-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48607408","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A new probability distribution: properties, copulas and applications in medicine and engineering
Pub Date: 2023-06-02, DOI: 10.18187/pjsor.v19i2.3633
Mohamed K. A. Refaie, Nadeem Shafique Butt, Emadeldin I. A. Ali
In this work, we construct a flexible three-parameter modification of the Chen distribution. The new Chen extension's hazard rate supports the "J-shaped", "monotonically increasing", "U-shaped", and "upside-down (reversed bathtub)" forms. We derive pertinent statistical features, and a few bivariate distributions are generated. Model parameters are estimated by maximum likelihood, and the maximum likelihood estimators are evaluated via graphical simulations. To demonstrate the applicability of the new approach, two real data sets are considered and examined. The Akaike information criterion, Bayesian information criterion, Cramer-von Mises criterion, Anderson-Darling criterion, and Kolmogorov-Smirnov test with its p-value are used to compare the new model with a variety of popular competing models.
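The paper's three-parameter Chen modification is not specified in this listing, so as a stand-in, here is a minimal sketch of the same fitting-and-comparison pipeline (MLE, AIC/BIC, Kolmogorov-Smirnov) for the baseline two-parameter Chen (2000) distribution, with data simulated by inverse transform:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import kstest

def chen_logpdf(x, lam, beta):
    # Baseline Chen (2000): f(x) = lam*beta*x^(beta-1)*exp(x^beta)*exp(lam*(1 - exp(x^beta)))
    xb = x ** beta
    return (np.log(lam) + np.log(beta) + (beta - 1.0) * np.log(x)
            + xb + lam * (1.0 - np.exp(xb)))

def chen_cdf(x, lam, beta):
    return 1.0 - np.exp(lam * (1.0 - np.exp(x ** beta)))

# Simulate a sample by inverting the CDF (hypothetical true parameters).
rng = np.random.default_rng(1)
u = rng.uniform(size=200)
lam0, beta0 = 0.5, 0.8
x = (np.log(1.0 - np.log(1.0 - u) / lam0)) ** (1.0 / beta0)

nll = lambda p: -np.sum(chen_logpdf(x, p[0], p[1]))
res = minimize(nll, x0=[1.0, 1.0], method="L-BFGS-B",
               bounds=[(1e-6, None), (1e-6, None)])
lam_hat, beta_hat = res.x

k, n = 2, x.size
print("AIC:", 2 * k + 2 * res.fun, "BIC:", k * np.log(n) + 2 * res.fun)
print(kstest(x, lambda t: chen_cdf(t, lam_hat, beta_hat)))  # KS statistic and p-value
```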
{"title":"A new probability distribution: properties, copulas and applications in medicine and engineering","authors":"Mohamed K. A. Refaie, Nadeem Shafique Butt, Emadeldin I. A. Ali","doi":"10.18187/pjsor.v19i2.3633","DOIUrl":"https://doi.org/10.18187/pjsor.v19i2.3633","url":null,"abstract":"In this work, we construct a three-parameter Chen modification that is flexible. The \"J shape\", \"monotonically increasing\", \"U shape,\" and \"upside down (reversed bathtub)\" hazard rate forms are all supported by the new Chen extension's hazard rate. We derive pertinent statistical features. A few distributions of the bivariate kind are generated. For evaluating the model parameters, we took the maximum likelihood estimation approach into consideration. Maximal likelihood estimators are evaluated via graphical simulations. To demonstrate the applicability of the new approach, two genuine data sets are taken into consideration and examined. The Akaike Information criterion, Bayesian Information criterion, Cramer-von Mises criterion, Anderson-Darling criterion, Kolmogorov-Smirnov test, and its related p-value are used to evaluate the new model with a variety of popular competing models.","PeriodicalId":19973,"journal":{"name":"Pakistan Journal of Statistics and Operation Research","volume":" ","pages":""},"PeriodicalIF":1.5,"publicationDate":"2023-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43064856","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Optimal Financial Resource Allocation Using Multiobjective Decision Making Model
Pub Date: 2023-06-02, DOI: 10.18187/pjsor.v19i2.4201
Teg Alam
The management of every industry must strive to meet multiple financial objectives, including capital structure, dividend policy, and earnings growth. This paper proposes an approach for allocating financial resources optimally using a multi-objective decision-making model. Al Rajhi Bank's financial statements are used as a case study, and all data are drawn from those statements. Overall, the study's results show that all objectives were achieved. The model enables banking and other industries to formulate strategies for dealing with various financial situations. The results are calculated and verified using LINGO 18.0 x64 software. Hence, the proposed model can support financial decisions and help develop strategies for dealing with various economic frameworks.
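The paper's objectives and coefficients come from Al Rajhi Bank's statements and are solved in LINGO; as a rough open-source analogue, here is a toy goal-programming sketch with scipy.optimize.linprog, in which every target, coefficient, and the budget is hypothetical:

```python
import numpy as np
from scipy.optimize import linprog

# Decision vector: [x1, x2, d1m, d1p, d2m, d2p]
# x1, x2: allocations to two hypothetical activities; d*m/d*p: under/over-achievement.
# Goal 1 (earnings): 0.08*x1 + 0.12*x2 + d1m - d1p = 10
# Goal 2 (liquidity): 0.30*x1 + 0.10*x2 + d2m - d2p = 25
c = np.array([0, 0, 1, 1, 1, 1])            # minimize total deviation from the goals
A_eq = np.array([[0.08, 0.12, 1, -1, 0, 0],
                 [0.30, 0.10, 0, 0, 1, -1]])
b_eq = np.array([10.0, 25.0])
A_ub = np.array([[1.0, 1.0, 0, 0, 0, 0]])   # budget constraint: x1 + x2 <= 100
b_ub = np.array([100.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 6)
print("allocations:", res.x[:2], "total goal deviation:", res.fun)
```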
{"title":"Optimal Financial Resource Allocation Using Multiobjective Decision Making Model","authors":"Teg Alam","doi":"10.18187/pjsor.v19i2.4201","DOIUrl":"https://doi.org/10.18187/pjsor.v19i2.4201","url":null,"abstract":"The management of each industry must strive to meet multiple financial objectives, including capital structure, dividend policy, and earnings growth. The paper proposes an approach to analyze how financial resources should be allocated optimally using a multi-objective decision-making model. As part of the study, Al Rajhi banks' financial statements are used as a case study. All of the data is drawn from the banks' financial statements. Overall, the study's results show that all objectives have been achieved. This model enables banking and other industries to formulate strategies for dealing with various financial situations. The study's results are calculated and verified using LINGO 18.0 x64 software. Hence, the proposed model can determine financial decisions and develop strategies for dealing with various economic frameworks.","PeriodicalId":19973,"journal":{"name":"Pakistan Journal of Statistics and Operation Research","volume":" ","pages":""},"PeriodicalIF":1.5,"publicationDate":"2023-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44573480","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A New Reciprocal System of Burr Type X Densities with Applications in Engineering, Reliability, Economy, and Medicine
Pub Date: 2023-06-02, DOI: 10.18187/pjsor.v19i2.4310
Mohamed K. A. Refaie, Emadeldin I. A. Ali
Building on Yousof et al. (2017a), a new one-parameter G family of distributions, called the reciprocal Burr X-G family, is defined and studied. A special member based on the well-known Burr type XII model, the reciprocal Burr X-Burr XII distribution, is studied and analyzed. Relevant properties of the new family, including ordinary moments, moments of the residual life, moments of the reversed residual life, and incomplete moments, are derived, and some of them are analyzed numerically. Four applications to real-life data sets are presented to illustrate the applicability and importance of the new family, which proves highly capable and flexible in practical applications and in the statistical modeling of real data.
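The reciprocal Burr X-G density is defined in the paper rather than in this listing, so the sketch below illustrates only the kind of numerical moment analysis the abstract mentions, using the classical one-parameter Burr X density as a placeholder baseline:

```python
import numpy as np
from scipy.integrate import quad

def burr_x_pdf(x, theta):
    # Classical one-parameter Burr X density (placeholder, not the reciprocal variant)
    return 2.0 * theta * x * np.exp(-x**2) * (1.0 - np.exp(-x**2)) ** (theta - 1.0)

def raw_moment(r, theta):
    # r-th ordinary moment E[X^r] by quadrature
    return quad(lambda x: x**r * burr_x_pdf(x, theta), 0.0, np.inf)[0]

def incomplete_moment(r, theta, t):
    # r-th lower incomplete moment: integral of x^r f(x) over [0, t]
    return quad(lambda x: x**r * burr_x_pdf(x, theta), 0.0, t)[0]

theta = 2.0
print("mean          :", raw_moment(1, theta))
print("E[X^2]        :", raw_moment(2, theta))
print("inc. moment t=1:", incomplete_moment(1, theta, 1.0))
```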
{"title":"A New Reciprocal System of Burr Type X Densities with Applications in Engineering, Reliability, Economy, and Medicine","authors":"Mohamed K. A. Refaie, Emadeldin I. A. Ali","doi":"10.18187/pjsor.v19i2.4310","DOIUrl":"https://doi.org/10.18187/pjsor.v19i2.4310","url":null,"abstract":"Depending on Yousof et al. (2017a), a new one parameter G family of distributions called the reciprocal Burr X-G family is defined and studied. Special member based on the well-known Burr type XII model called the reciprocal Burr X-Burr XII distribution is studied and analyzed. Relevant properties of the new family including ordinary moments, moment of the residual life, moment of the reversed residual life and incomplete moments are derived and some of them are numerically analyzed. Four different applications to real-life data sets are presented to illustrate the applicability and importance of the new family. The new family has proven to be highly capable and flexible in practical applications and statistical modeling of real data.","PeriodicalId":19973,"journal":{"name":"Pakistan Journal of Statistics and Operation Research","volume":" ","pages":""},"PeriodicalIF":1.5,"publicationDate":"2023-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45220402","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Implementation of Bayesian Simulation for Earthquake Disaster Risk Analysis in Indonesia based on Gutenberg Richter Model and Copula Method
Pub Date: 2023-06-02, DOI: 10.18187/pjsor.v19i2.3089
P. P. Oktaviana, K. Fithriasari
Indonesia is prone to earthquakes because it lies in the Pacific Ring of Fire, and these earthquakes cause extensive damage and casualties. In this paper, we use Bayesian simulation with the Gutenberg-Richter model and the copula method to estimate earthquake risk parameters, specifically the probability and the recurrence (return) period of an earthquake occurrence in Indonesia. These risk parameters are estimated from the dependence structure between earthquake frequency and magnitude. The dependence structure can be determined using the Gutenberg-Richter model, a linear-regression-based model, or the copula method, a statistical method that does not require linearity and normality assumptions on the data; Bayesian simulation is then used to estimate the parameters. The data are annual frequencies and magnitudes (magnitude ≥ 4 on the Richter scale) of earthquakes occurring in Indonesia over four years, obtained from the Meteorological, Climatological, and Geophysical Agency of Indonesia. The analysis proceeds in three steps: first, we regress earthquake frequency on magnitude to determine the Gutenberg-Richter model; second, we perform the copula analysis; third, we estimate the probability and the recurrence (return) period of an earthquake occurrence using Bayesian simulation based on the results of steps one and two. The results indicate that Bayesian simulation estimates the risk parameters very well.
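The first step, the Gutenberg-Richter regression log10 N(M) = a - bM, and the return period derived from it can be sketched directly; the counts below are illustrative, and the copula and Bayesian-simulation stages of the paper are not reproduced:

```python
import numpy as np

# Hypothetical annual counts of earthquakes by magnitude threshold (illustrative only)
mag = np.array([4.0, 4.5, 5.0, 5.5, 6.0, 6.5])
n_per_year = np.array([850.0, 300.0, 110.0, 40.0, 14.0, 5.0])

# Gutenberg-Richter law: log10 N(M) = a - b*M, fitted by ordinary least squares
slope, intercept = np.polyfit(mag, np.log10(n_per_year), 1)
a, b = intercept, -slope
print(f"a = {a:.2f}, b = {b:.2f}")

def annual_rate(m):
    return 10.0 ** (a - b * m)

# Return period and one-year exceedance probability (Poisson occurrence assumption)
m = 6.0
lam = annual_rate(m)
print("return period for M >= 6 (years):", 1.0 / lam)
print("P(at least one M >= 6 in a year):", 1.0 - np.exp(-lam))
```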
{"title":"Implementation of Bayesian Simulation for Earthquake Disaster Risk Analysis in Indonesia based on Gutenberg Richter Model and Copula Method","authors":"P. P. Oktaviana, K. Fithriasari","doi":"10.18187/pjsor.v19i2.3089","DOIUrl":"https://doi.org/10.18187/pjsor.v19i2.3089","url":null,"abstract":"Indonesia is a country prone to earthquakes because it is located in the Pasific ring of fire area. The earthquakes caused a lot of damages and casualties. In this paper, we use Bayesian Simulation on Gutenberg Richter model and Copula method to estimate the risk parameters of earthquake, specifically the probability and the recurrence (return) period of an earthquake occurrence in Indonesia. Those risk parameters are estimated from dependence structure of frequency and magnitude of earthquakes. The dependence structure can be determined by using Gutenberg Richter model and Copula method. The Gutenberg Richter model is a model based on linear regression used to determine dependence structure, while the Copula method is a statistical method used to determine dependence structure that ignores linearity and normality assumptions of data. Bayesian Simulation is a method used to estimate parameters based on simulation. The data used is an annual data of frequency and magnitude (magnitude ≥ 4 Richter Scale) of earthquakes occur in Indonesia for 4 years from Meteorological, Climatological, and Geophysical Agency of Indonesia. There are several steps of analysis to be performed: firstly, we perform regression analysis of frequency and magnitude of the earthquakes to determine Gutenberg Richter Model; secondly, we perform Copula analysis; thirdly, we estimate probability and the recurrence (return) period of an earthquake occurrence using Bayesian Simulation based on the result of step one and two. The result indicates Bayesian Simulation can estimate risk parameters very well.","PeriodicalId":19973,"journal":{"name":"Pakistan Journal of Statistics and Operation Research","volume":" ","pages":""},"PeriodicalIF":1.5,"publicationDate":"2023-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43790224","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Comparative Study of Higher Order Kernel Estimation and Kernel Density Derivative Estimation of the Gaussian Kernel Estimator with Data Application
Pub Date: 2023-06-02, DOI: 10.18187/pjsor.v19i2.4233
Siloko Israel Uzuazor, Ojobor Sunday Amaju
Higher-order kernel estimation and kernel density derivative estimation are techniques for reducing the asymptotic mean integrated squared error in nonparametric kernel density estimation, and a reduction in this error criterion indicates better performance. Kernel estimation relies heavily on the bandwidth, and the reduction methods identified in the literature depend on the bandwidth for their implementation. This study examines the performance of the two techniques for the Gaussian kernel estimator, owing to its wide applicability in real-life settings. Explicit expressions for the bandwidth selectors of the two techniques for the Gaussian kernel, and the bandwidths themselves, are obtained. Empirical results on two data sets show that kernel density derivative estimation clearly outperforms higher-order kernel estimation under the asymptotic mean integrated squared error criterion.
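A concrete way to see the shared reliance on the bandwidth is to evaluate a second-order and a fourth-order Gaussian kernel estimate side by side. A minimal sketch, assuming the standard fourth-order Gaussian kernel K4(u) = (3 - u^2)phi(u)/2 and Silverman's rule-of-thumb bandwidth (the paper's own selectors are not given in this listing):

```python
import numpy as np

def phi(u):
    # Standard normal density
    return np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)

def kde(x_grid, data, h, kernel):
    # Kernel density estimate on a grid: (1/nh) * sum_i K((x - X_i)/h)
    u = (x_grid[:, None] - data[None, :]) / h
    return kernel(u).mean(axis=1) / h

gauss2 = phi                                     # second-order Gaussian kernel
gauss4 = lambda u: 0.5 * (3.0 - u**2) * phi(u)   # fourth-order Gaussian kernel

rng = np.random.default_rng(2)
data = rng.normal(0.0, 1.0, 400)
h = 1.06 * data.std(ddof=1) * len(data) ** (-0.2)  # Silverman's rule of thumb

grid = np.linspace(-3.0, 3.0, 7)
print("2nd order:", np.round(kde(grid, data, h, gauss2), 4))
print("4th order:", np.round(kde(grid, data, h, gauss4), 4))
```

Note that a fourth-order kernel can return slightly negative estimates in the tails; that is the usual price of the bias reduction it buys.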
{"title":"A Comparative Study of Higher Order Kernel Estimation and Kernel Density Derivative Estimation of the Gaussian Kernel Estimator with Data Application","authors":"Siloko Israel Uzuazor, Ojobor Sunday Amaju","doi":"10.18187/pjsor.v19i2.4233","DOIUrl":"https://doi.org/10.18187/pjsor.v19i2.4233","url":null,"abstract":"Higher-order kernel estimation and kernel density derivative estimation are techniques for reducing the asymptotic mean integrated squared error in nonparametric kernel density estimation. A reduction in the error criterion is an indication of better performance. The estimation of kernel function relies greatly on bandwidth and the identified reduction methods in the literature are bandwidths reliant for their implementation. This study examines the performance of higher order kernel estimation and kernel density derivatives estimation techniques with reference to the Gaussian kernel estimator owing to its wide applicability in real-life-settings. The explicit expressions for the bandwidth selectors of the two techniques in relation to the Gaussian kernel and the bandwidths were accurately obtained. Empirical results using two data sets obviously revealed that kernel density derivative estimation outperformed the higher order kernel estimation excellently well with the asymptotic mean integrated squared error as the criterion function.","PeriodicalId":19973,"journal":{"name":"Pakistan Journal of Statistics and Operation Research","volume":" ","pages":""},"PeriodicalIF":1.5,"publicationDate":"2023-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43949141","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On The New Modified Burr XII Distribution: Development, Properties, Characterizations and Applications
Pub Date: 2023-06-02, DOI: 10.18187/pjsor.v19i2.3350
F. Bhatti, G. Hamedani, M. C. Korkmaz, H. Yousof, M. Ahmad
A new distribution with a flexible hazard rate function, called the new modified Burr XII (NMBXII) distribution, is introduced. The proposed distribution is derived via the T-X family technique by compounding the generalized Nadarajah-Haghighi (GNH) and gamma distributions. We highlight the shapes of the NMBXII density and failure rate functions: the density can be J-shaped, reverse-J-shaped, positively skewed, or symmetrical, and the model can produce almost all types of failure rates, including increasing, decreasing, increasing-decreasing, decreasing-increasing, bimodal, inverted bathtub, and modified bathtub. To show the importance of the proposed distribution, we establish various mathematical properties theoretically, such as quantiles, moments, incomplete moments, inequality measures, residual life functions, and reliability measures, and we characterize the NMBXII distribution via two techniques. Maximum likelihood estimation is used for the model parameters, and the precision of the MLEs is assessed via a simulation study. Three real data sets are considered to demonstrate the potential and utility of the NMBXII model, establishing empirically that it is suitable for applications to tax revenue, times between successive earthquakes, and flood discharges. Finally, various model selection criteria, goodness-of-fit statistics, and graphical tools are used to examine the adequacy of the NMBXII distribution.
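The T-X construction the abstract invokes takes a "transformer" random variable T with CDF R, a baseline CDF G, and a link W, and sets F(x) = R(W(G(x))). The sketch below shows the generic recipe with a gamma transformer and a normal baseline; the NMBXII's specific GNH-gamma compounding is defined in the paper, not here:

```python
import numpy as np
from scipy.stats import gamma, norm

def tx_cdf(x, base_cdf, t_dist, W):
    # Generic T-X family (Alzaatreh et al., 2013): F(x) = R(W(G(x)))
    return t_dist.cdf(W(base_cdf(x)))

W = lambda g: -np.log1p(-g)          # upper-tail link, maps (0,1) onto (0, inf)
t = gamma(a=2.0)                     # transformer T ~ Gamma(2, 1): the gamma-G family
G = norm(loc=0.0, scale=1.0).cdf     # baseline G: standard normal (illustrative)

xs = np.linspace(-3.0, 3.0, 7)
print(np.round(tx_cdf(xs, G, t, W), 4))  # a valid CDF: increasing from 0 to 1
```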
{"title":"On The New Modified Burr XII Distribution: Development, Properties, Characterizations and Applications","authors":"F. Bhatti, G. Hamedani, M. C. Korkmaz, H. Yousof, M. Ahmad","doi":"10.18187/pjsor.v19i2.3350","DOIUrl":"https://doi.org/10.18187/pjsor.v19i2.3350","url":null,"abstract":"A new distribution with flexible hazard rate function is introduced which is called new modified Burr XII (NMBXII) distribution. The proposed distribution is derived from the T-X family technique and compounding the generalized Nadarajah–Haghighi (GNH) and gamma distributions. We highlighted the shapes of NMBXII density and failure rate functions. The density function of NMBXII model can take shapes such as J, reverse J, positively skewed and symmetrical. The proposed model can produce almost all types of failure rates such as increasing, decreasing, increasing-decreasing, decreasing-increasing, bimodal, inverted bathtub and modified bathtub. To show the importance of the proposed distribution, we established various mathematical properties such as quantiles, moments, incomplete moments, inequality measures, residual life functions and reliability measures theoretically. We have characterized the NMBXII distribution via two techniques. We addressed the maximum likelihood estimation technique for model parameters. The precision of the MLEs is estimated via a simulation study. We have considered three real data sets for applications to demonstrate the potentiality and utility of the NMBXII model. Then, we have established empirically that the proposed model is suitable for tax revenue, time periods between successive earthquakes and flood discharges applications. Finally, various model selection criteria, the goodness of fit statistics and graphical tools were used to examine the adequacy of the NMBXII distribution. ","PeriodicalId":19973,"journal":{"name":"Pakistan Journal of Statistics and Operation Research","volume":"1 1","pages":""},"PeriodicalIF":1.5,"publicationDate":"2023-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41665753","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Weighted Power Quasi Lindley Distribution with Properties and Applications of Life-time Data
Pub Date: 2023-06-02, DOI: 10.18187/pjsor.v19i2.3922
R. Ganaie, V. Rajagopalan
In this paper, we develop a new model based on the Power Quasi Lindley distribution, the Weighted Power Quasi Lindley distribution, by introducing the weighting technique. We derive various mathematical and statistical properties, including order statistics, the likelihood ratio test, moments, the harmonic mean, income distribution curves, entropy, and reliability measures. We also discuss parameter estimation by maximum likelihood and obtain the Fisher information matrix. Finally, the applicability and potential of the new distribution for handling data are investigated using two real-life data sets.
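The weighting technique referred to is the classical construction f_w(x) = w(x)f(x)/E[w(X)]. A minimal numerical sketch, assuming a length-biased weight w(x) = x and a common Quasi Lindley parameterization as the baseline (the paper's Power Quasi Lindley baseline and weight choice may differ):

```python
import numpy as np
from scipy.integrate import quad

def weighted_pdf(pdf, w, x, lower=0.0, upper=np.inf):
    # Classical weighted density: f_w(x) = w(x) f(x) / E[w(X)]
    norm_const = quad(lambda t: w(t) * pdf(t), lower, upper)[0]
    return w(x) * pdf(x) / norm_const

# Baseline: one common Quasi Lindley parameterization (assumption, for illustration)
alpha, theta = 2.0, 1.0
ql_pdf = lambda x: theta * (alpha + theta * x) * np.exp(-theta * x) / (alpha + 1.0)
w = lambda x: x  # length-biased weight (hypothetical choice)

xs = np.array([0.5, 1.0, 2.0, 4.0])
print(np.round(weighted_pdf(ql_pdf, w, xs), 4))
```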
{"title":"The Weighted Power Quasi Lindley Distribution with Properties and Applications of Life-time Data","authors":"R. Ganaie, V. Rajagopalan","doi":"10.18187/pjsor.v19i2.3922","DOIUrl":"https://doi.org/10.18187/pjsor.v19i2.3922","url":null,"abstract":"In this paper, we have executed a new model of Power Quasi Lindley distribution known as Weighted Power Quasi Lindley distribution by introducing the weighted technique. We have also executed its various mathematical and statistical properties like order statistics, likelihood Ratio test, moments, harmonic mean, Income distribution curves, entropy and reliability measures. We also have discussed its parameter estimation by applying the method of maximum likelihood estimator and also we have obtained its Fisher’s information matrix. Finally, the applicability and potentiality of the new distribution in handling data has been investigated by executing the two real life data sets.","PeriodicalId":19973,"journal":{"name":"Pakistan Journal of Statistics and Operation Research","volume":" ","pages":""},"PeriodicalIF":1.5,"publicationDate":"2023-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45245913","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Remarks on the Paper "On the Product and Quotient of Pareto and Rayleigh Random Variables"
Pub Date: 2023-06-02, DOI: 10.18187/pjsor.v19i2.4311
G. Hamedani
Obeid and Kadry (2019) attempted to study the product and quotient of independent Pareto and Rayleigh random variables. The distributions they obtained are incorrect; in fact, no closed-form distributions exist, as discussed here. We point out the errors made and establish correct versions of the probability density functions of these distributions for truncated Pareto and Rayleigh random variables.
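A quick way to vet any proposed closed form for such products and quotients is Monte Carlo: simulate the two independent variables and compare empirical summaries against the candidate distribution. A minimal sketch with hypothetical shape and scale choices:

```python
import numpy as np
from scipy.stats import pareto, rayleigh

rng = np.random.default_rng(3)
n = 100_000
X = pareto(b=3.0).rvs(size=n, random_state=rng)      # Pareto, shape 3 (support x >= 1)
Y = rayleigh(scale=1.0).rvs(size=n, random_state=rng)

prod, quot = X * Y, X / Y

# Empirical quartiles; any claimed closed-form CDF should reproduce these closely.
for name, z in (("product", prod), ("quotient", quot)):
    print(name, "quartiles:", np.round(np.quantile(z, [0.25, 0.5, 0.75]), 3))
```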
{"title":"Remarks on the Paper ”On the product and Quotient of Pareto and Rayleigh Random Variables”","authors":"G. Hamedani","doi":"10.18187/pjsor.v19i2.4311","DOIUrl":"https://doi.org/10.18187/pjsor.v19i2.4311","url":null,"abstract":"Obeid and Kadry (2019) tried to study the product and quotient of the independent Pareto and Rayleigh random variables. The distributions they obtained are incorrect and in fact there is no closed form distributions as discussed here. We will mention the errors made and try to establish the correct versions of the probability density functions of these distributions for the truncated Pareto and Rayleigh random variables.","PeriodicalId":19973,"journal":{"name":"Pakistan Journal of Statistics and Operation Research","volume":" ","pages":""},"PeriodicalIF":1.5,"publicationDate":"2023-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46579520","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Bayesian Estimation in Rayleigh Distribution under a Distance Type Loss Function
Pub Date: 2023-06-02, DOI: 10.18187/pjsor.v19i2.4130
B. Seal, P. Banerjee, Shreya Bhunia, S. Ghosh
Estimation of unknown parameters under different loss functions constitutes a major area of decision theory. Distance loss functions are particularly attractive because they measure the discrepancy between two probability density functions from the same family indexed by different parameters. In this article, the Hellinger distance loss function is considered for the scale parameter λ of the two-parameter Rayleigh distribution. After simplification, a form of the loss is obtained that is meaningful when the parameter is not large, and the Bayes estimate of λ is calculated under that loss. Because it is obtained using an approximation to the actual loss, this Bayes estimate may be termed a 'pseudo-Bayes estimate' with respect to the actual Hellinger distance loss function. To compare performance, we also consider the weighted squared error loss function (WSELF), which is commonly used for estimating the scale parameter. An extensive simulation studies the behaviour of the Bayes estimators under the three loss functions, i.e., the simplified, actual, and WSE loss functions. The numerical results show that the estimators perform well under the Hellinger distance loss function in comparison with the traditionally used WSELF. We also demonstrate the methodology by analyzing two real-life data sets.
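The loss in question is built from the squared Hellinger distance H^2(f, g) = 1 - ∫ sqrt(f g). A toy sketch of a Bayes-type estimate under this loss, assuming a one-parameter Rayleigh (scale only) and a hypothetical discrete posterior, since the paper's two-parameter setup and simplified loss form are not reproduced here:

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import rayleigh

def hellinger_sq(s1, s2):
    # Squared Hellinger distance between Rayleigh(scale=s1) and Rayleigh(scale=s2):
    # H^2 = 1 - integral of sqrt(f1 * f2)
    bc = quad(lambda x: np.sqrt(rayleigh(scale=s1).pdf(x) * rayleigh(scale=s2).pdf(x)),
              0.0, np.inf)[0]
    return 1.0 - bc

# Hypothetical discrete posterior over the scale parameter (for illustration only)
grid = np.linspace(0.5, 3.0, 21)
post = np.exp(-0.5 * (grid - 1.5) ** 2 / 0.1)
post /= post.sum()

# Bayes rule under Hellinger loss: minimize posterior expected H^2 over decisions d
risk = [sum(p * hellinger_sq(s, d) for s, p in zip(grid, post)) for d in grid]
print("Bayes-type estimate of the scale:", grid[int(np.argmin(risk))])
```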
{"title":"Bayesian Estimation in Rayleigh Distribution under a Distance Type Loss Function","authors":"B. Seal, P. Banerjee, Shreya Bhunia, S. Ghosh","doi":"10.18187/pjsor.v19i2.4130","DOIUrl":"https://doi.org/10.18187/pjsor.v19i2.4130","url":null,"abstract":"Estimation of unknown parameters using different loss functions encompasses a major area in the decision theory. Specifically, distance loss functions are preferable as it measures the discrepancies between two probability density functions from the same family indexed by different parameters. In this article, Hellinger distance loss function is considered for scale parameter λ of two-parameter Rayleigh distribution. After simplifications, form of loss is obtained and that is meaningful if parameter is not large and Bayes estimate of λ is calculated under that loss function. So, the Bayes estimate may be termed as ‘Pseudo Bayes estimate’ with respect to the actual Hellinger distance loss function as it is obtained using approximations to actual loss. To compare the performance of the estimator under these loss functions, we also consider weighted squared error loss function (WSELF) which is usually used for the estimation of the scale parameter. An extensive simulation is carried out to study the behaviour of the Bayes estimators under the three different loss functions, i.e. simplified, actual and WSE loss functions. From the numericalresults it is found that the estimators perform well under the Hellinger distance loss function in comparison with the traditionally used WSELF. Also, we demonstrate the methodology by analyzing two real-life datasets.","PeriodicalId":19973,"journal":{"name":"Pakistan Journal of Statistics and Operation Research","volume":" ","pages":""},"PeriodicalIF":1.5,"publicationDate":"2023-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46830071","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}