Integration of 4253HT Smoother with Intuitionistic Fuzzy Time Series Forecasting Model
Pub Date: 2022-12-06 | DOI: 10.18187/pjsor.v18i4.4212
N. Alam, N. Ramli, Adie Safian Ton Mohamed, Noor Izyan Mohamad Adnan
Fuzzy time series models are widely used to forecast time series data expressed in linguistic terms. Incorporating intuitionistic fuzzy sets (IFS) into fuzzy time series allows uncertainty and vagueness in the data to be handled better. However, time series data often fluctuate randomly and exhibit drastic changes. In this study, the 4253HT smoother is integrated with the intuitionistic fuzzy time series forecasting model to improve forecasting accuracy. The proposed model is applied to the prediction of Malaysian crude palm oil prices. The data are first smoothed and then fuzzified. The fuzzy sets are then transformed into IFS, and de-i-fuzzification is performed by distributing the hesitancy equally. The forecasts are computed from the defuzzified values, using the new membership degrees of the IFS obtained after de-i-fuzzification. The results show that the integrated model outperforms the standard intuitionistic fuzzy time series forecasting model. In future work, data smoothing should be considered as a preliminary step before forecasting with fuzzy time series.
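The 4253HT label is commonly read as Tukey's 4253H,twice compound smoother: running medians of spans 4, 2, 5, and 3, a Hanning filter with weights 1/4, 1/2, 1/4, and a second pass over the rough (residuals) that is added back. The sketch below is a minimal Python illustration of that pipeline, not the authors' implementation; the simple end-rules, the pandas-based running medians, and the toy price series are assumptions made for the example.

```python
import numpy as np
import pandas as pd

def running_median(x, span):
    # centred running median; end values use shorter windows (a common simplification)
    return (pd.Series(x)
            .rolling(window=span, center=True, min_periods=1)
            .median()
            .to_numpy())

def hanning(x):
    # weights 1/4, 1/2, 1/4; the two endpoints are passed through unchanged
    y = x.copy()
    y[1:-1] = 0.25 * x[:-2] + 0.5 * x[1:-1] + 0.25 * x[2:]
    return y

def smooth_4253h(x):
    y = running_median(x, 4)   # median of 4, re-centred by a median of 2
    y = running_median(y, 2)
    y = running_median(y, 5)   # then medians of 5 and 3
    y = running_median(y, 3)
    return hanning(y)          # finish with the Hanning filter

def smooth_4253ht(x):
    """4253H,twice: smooth, then smooth the rough and add it back (re-roughing)."""
    x = np.asarray(x, dtype=float)
    smooth = smooth_4253h(x)
    rough = x - smooth
    return smooth + smooth_4253h(rough)

# toy noisy series standing in for monthly crude palm oil prices
prices = np.array([2300, 2350, 2900, 2380, 2400, 2450, 2410, 3100, 2480, 2500, 2520, 2490], float)
print(np.round(smooth_4253ht(prices), 1))
```

In the paper's pipeline, the smoothed series would then be fuzzified and converted to IFS before forecasting; only the smoothing step is shown here.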
{"title":"Integration of 4253HT Smoother with Intuitionistic Fuzzy Time Series Forecasting Model","authors":"N. Alam, N. Ramli, Adie Safian Ton Mohamed, Noor Izyan Mohamad Adnan","doi":"10.18187/pjsor.v18i4.4212","DOIUrl":"https://doi.org/10.18187/pjsor.v18i4.4212","url":null,"abstract":"Fuzzy time series is widely used in forecasting time series data in linguistic forms. Implementing the intuitionistic fuzzy sets (IFS) in fuzzy time series can better handle uncertainties and vagueness in the time series data. However, the time series data always fluctuate randomly and cause drastic changes. In this study, the 4253HT smoother is integrated with the intuitionistic fuzzy time series forecasting model to improve the forecasting accuracy. The proposed model is implemented in predicting the Malaysian crude palm oil prices. The data are firstly smoothed, and followed with the fuzzification process. Next are the transformation of fuzzy sets into IFS and the de-i-fuzzification via equal distribution of hesitancy. The forecasted data are calculated based on the defuzzified values considering the new membership degrees of the IFS after de-i-fuzzification. The results show that the integrated model produces a better forecasting performance compared to the common intuitionistic fuzzy time series forecasting model. In the future, the integration of the data smoothing should be considered before the forecasting of data using fuzzy time series could be performed.","PeriodicalId":19973,"journal":{"name":"Pakistan Journal of Statistics and Operation Research","volume":" ","pages":""},"PeriodicalIF":1.5,"publicationDate":"2022-12-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41572424","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Amputated Life Testing for Weibull-Fréchet Percentiles: Single, Double and Multiple Group Sampling Inspection Plans with Applications
Pub Date: 2022-12-06 | DOI: 10.18187/pjsor.v18i4.4190
Basma J. Ahmed, C. Chesneau, M. M. Ali, H. Yousof
Group sampling inspection plans of single-, two-, and multiple-stage types are introduced for life tests that are terminated at a predetermined time in order to decide whether to accept or reject the submitted batches. The tables in this study give the optimal number of groups for various confidence levels, examination limits, and values of the ratio of the prescribed experiment time to the fixed percentile life. The operating characteristic functions and the corresponding producer's risk are derived at various quality levels for each type of group sampling inspection plan. For a specified producer's risk, the optimal ratios of the true percentile life to the fixed percentile life are obtained. Three case studies are provided to illustrate the procedures described here. Comparisons between single-stage and iterative group sampling plans are also presented. The first, second, and third sample minima are used to guarantee that the product's stipulated mean and median lifetimes are attained at a given level of consumer confidence. The operating characteristic values and the producer's risk of the suggested sampling plans are reported. Real-world examples are examined to show how the proposed approaches based on the mean and median product lifetimes perform in practice.
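To make the design logic concrete, the sketch below shows one standard single-stage formulation: items are allocated to g groups of r units, the test is truncated at a multiple a of the specified percentile life, and the lot is accepted if at most c failures are observed in total. It uses a plain Weibull lifetime as a stand-in for the paper's Weibull-Fréchet model, so the numbers are purely illustrative; the functions, parameter values, and the binomial acceptance rule are assumptions for the example, not the authors' tables.

```python
import numpy as np
from scipy.stats import binom

def weibull_cdf(t, scale, shape):
    return 1.0 - np.exp(-(t / scale) ** shape)

def scale_for_percentile(t_q, q, shape):
    # Weibull scale parameter that places the q-th percentile at t_q
    return t_q / (-np.log(1.0 - q)) ** (1.0 / shape)

def min_groups(p_star, r, c, a, q=0.5, shape=2.0, g_max=1000):
    """Smallest number of groups g such that a lot whose true q-th percentile
    equals the specified one is accepted with probability at most 1 - p_star
    (termination time = a * specified percentile)."""
    scale = scale_for_percentile(1.0, q, shape)   # specified percentile normalised to 1
    p0 = weibull_cdf(a, scale, shape)             # failure probability by the termination time
    for g in range(1, g_max + 1):
        if binom.cdf(c, g * r, p0) <= 1.0 - p_star:
            return g
    return None

def producer_risk(g, r, c, a, ratio, q=0.5, shape=2.0):
    """Probability of rejecting the lot when the true percentile is
    `ratio` times the specified one (the producer's risk)."""
    scale = ratio * scale_for_percentile(1.0, q, shape)
    p = weibull_cdf(a, scale, shape)
    return 1.0 - binom.cdf(c, g * r, p)

g = min_groups(p_star=0.95, r=5, c=2, a=0.5)
print(g, round(producer_risk(g, r=5, c=2, a=0.5, ratio=4.0), 4))
```

The paper's tables play the role of `min_groups` here, and the optimal percentile-life ratios correspond to the smallest `ratio` keeping `producer_risk` below a target value.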
{"title":"Amputated Life Testing for Weibull-Fréchet Percentiles: Single, Double and Multiple Group Sampling Inspection Plans with Applications","authors":"Basma J. Ahmed, C. Chesneau, M. M. Ali, H. Yousof","doi":"10.18187/pjsor.v18i4.4190","DOIUrl":"https://doi.org/10.18187/pjsor.v18i4.4190","url":null,"abstract":"When a life test is terminated at a predetermined time to decide whether to accept or refuse the submitted batches, the types of group sampling inspection plans (single, two, and multiple-stages) are introduced. The tables in this study give the optimal number of groups for various confidence levels, examination limits, and values of the ratio of the determined experiment time to the fixed percentile life. At various quality levels, the operating characteristic functions and accompanying producer's risk are derived for various types of group sampling inspection plans. At the determined producer's risk, the optimal ratios of real percentile life to a fixed percentile life are obtained. Three case studies are provided to illustrate the processes described here. Comparisons of single-stage and iterative group sampling plans are introduced. The first, second, and third sample minimums must be used to guarantee that the product's stipulated mean and median lifetimes are reached at a certain degree of customer trust. The suggested sample plans' operational characteristic values and the producer's risk are given. In order to show how the suggested approaches based on the mean life span and median life span of the product may function in reality, certain real-world examples are examined.","PeriodicalId":19973,"journal":{"name":"Pakistan Journal of Statistics and Operation Research","volume":" ","pages":""},"PeriodicalIF":1.5,"publicationDate":"2022-12-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43914024","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Intelligent Hybrid Model Using Artificial Neural Networks and Particle Swarm Optimization Technique For Financial Crisis Prediction
Maryam Maryam, Dimas Aryo Anggoro, Muhibah Fata Tika, Fitri Cahya Kusumawati
Pub Date: 2022-12-06 | DOI: 10.18187/pjsor.v18i4.3927
Financial crisis prediction is a critical issue in economics. Correct predictions give stakeholders the knowledge needed to formulate policies that preserve and strengthen economic stability. Several approaches for predicting financial crises have been developed; however, the performance and prediction accuracy of existing classification models, as well as the available data, remain insufficient for real applications. An efficient prediction model with higher performance is therefore required. This paper proposes a hybrid intelligent prediction model that uses an Artificial Neural Network (ANN) for prediction and Particle Swarm Optimization (PSO) for optimization. First, PSO searches for the ANN parameters, namely the weights and thresholds, that yield the best-fitting architecture. These parameters are then used to make predictions on the given dataset, and the resulting ANN-PSO model generates predicted values of crisis conditions. The proposed ANN-PSO model is applied to time series data on economic conditions in Indonesia, obtained from the International Monetary Fund and the Indonesian Economic and Financial Statistics. The independent variables comprise 13 potential indicators: imports, exports, trade exchange rates, foreign exchange reserves, the composite stock price index, real exchange rates, real deposit rates, bank deposits, loan and deposit interest rates, the difference between the real BI rate and the real FED rate, M1, the M2 multiplier, and the ratio of M2 to foreign exchange reserves. The dependent variable is the perfect signal value based on the Financial Pressure Index. A detailed statistical analysis of the dataset, including the threshold value used to indicate crisis conditions, is also provided. Experimental analysis shows that the proposed model is reliable under different evaluation criteria, and the case studies show that the predictions are largely consistent with the actual situation, which greatly aids financial crisis prediction.
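As a rough illustration of the ANN-PSO idea, the sketch below lets a plain global-best PSO search the weights and thresholds (biases) of a small one-hidden-layer network instead of gradient training. The data, network size, and swarm settings are invented placeholders rather than the authors' configuration with the 13 Indonesian indicators.

```python
import numpy as np

rng = np.random.default_rng(0)

def ann_forward(params, X, n_hidden):
    """Tiny one-hidden-layer network; `params` is the flattened weight vector."""
    n_in = X.shape[1]
    i = 0
    W1 = params[i:i + n_in * n_hidden].reshape(n_in, n_hidden); i += n_in * n_hidden
    b1 = params[i:i + n_hidden]; i += n_hidden
    W2 = params[i:i + n_hidden]; i += n_hidden
    b2 = params[i]
    h = np.tanh(X @ W1 + b1)
    return h @ W2 + b2

def mse(params, X, y, n_hidden):
    return np.mean((ann_forward(params, X, n_hidden) - y) ** 2)

def pso(objective, dim, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5):
    """Plain global-best particle swarm optimisation."""
    pos = rng.uniform(-1, 1, (n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest = pos.copy()
    pbest_val = np.array([objective(p) for p in pos])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.array([objective(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

# toy stand-in for the 13 crisis indicators and a binary-ish pressure signal
X = rng.normal(size=(100, 13))
y = (X[:, 0] - 0.5 * X[:, 1] > 0).astype(float)

n_hidden = 5
dim = 13 * n_hidden + n_hidden + n_hidden + 1   # W1 + b1 + W2 + b2
best_params, best_err = pso(lambda p: mse(p, X, y, n_hidden), dim)
print("training MSE:", round(best_err, 4))
```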
{"title":"An Intelligent Hybrid Model Using Artificial Neural Networks and Particle Swarm Optimization Technique For Financial Crisis Prediction","authors":"Maryam Maryam, Dimas Aryo Anggoro, Muhibah Fata Tika, Fitri Cahya Kusumawati","doi":"10.18187/pjsor.v18i4.3927","DOIUrl":"https://doi.org/10.18187/pjsor.v18i4.3927","url":null,"abstract":"Financial crisis prediction is a critical issue in the economic phenomenon. Correct predictions can provide the knowledge for stakeholders to make policies to preserve and increase economic stability. Several approaches for predicting the financial crisis have been developed. However, the classification model's performance and prediction accuracy, as well as legal data, are insufficient for usage in real applications. So that, an efficient prediction model is required for higher performance results. This paper adopts a novel two-hybrid intelligent prediction model using an Artificial Neural Network (ANN) for prediction and Particle Swarm Optimization (PSO) for optimization. At first, a PSO technique produces the hyperparameter value for ANN to fit the best architecture. They are weights and thresholds. Then, they are used to predict the performance of the given dataset. In the end, ANN-PSO generates predictions value of crisis conditions. The proposed ANN-PSO model is implemented on time series data of economic conditions in Indonesia. Dataset was obtained from International Monetary Fund and the Indonesian Economic and Financial Statistics. Independent variable data using 13 potential indicators, namely imports, exports, trade exchange rates, foreign exchange reserves, the composite stock price index, real exchange rates, real deposit rates, bank deposits, loan and deposit interest rates, the difference between the real BI rate and the real FED rate, the M1, M2 multiplier, and the ratio of M2 to foreign exchange reserves. Meanwhile, the dependent variable uses the perfect signal value based on the Financial Pressure Index. A detailed statistical analysis of the dataset is also given by threshold value to convey crisis conditions. Experimental analysis shows that the proposed model is reliable based on the different evaluation criteria. The case studies show that the result for predictive data is basically consistent with the actual situation, which has greatly helped the prediction of a financial crisis. ","PeriodicalId":19973,"journal":{"name":"Pakistan Journal of Statistics and Operation Research","volume":" ","pages":""},"PeriodicalIF":1.5,"publicationDate":"2022-12-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41462163","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A New Weighted-Lindley Distribution: Properties, Classical and Bayesian Estimation with an Application
Pub Date: 2022-12-06 | DOI: 10.18187/pjsor.v18i4.4106
B. Hosseini, M. Afshari, M. Alizadeh, A. Afify
Choosing the most suitable statistical distribution for modeling data is very important. Newly developed distributions are generally more flexible for modeling real data that exhibit a high degree of skewness and kurtosis. In this paper, we define a new one-parameter lifetime distribution, called the weighted-Lindley distribution. Some of its basic properties are investigated, and several classical and Bayesian methods are used to estimate its parameter. The behavior of these estimators is examined through a graphical simulation study. A real data set is analyzed to investigate the flexibility of the new weighted-Lindley distribution.
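The abstract does not give the density of the new weighted-Lindley distribution, so the sketch below illustrates the classical-versus-Bayesian estimation workflow on the ordinary one-parameter Lindley distribution, f(x; a) = a^2/(a+1) (1+x) e^{-ax}, used here only as a stand-in; the simulated data, the Gamma(0.01, 0.01) prior, and the grid evaluation are assumptions for the example, not the authors' procedure.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Ordinary one-parameter Lindley density used as a stand-in:
# f(x; a) = a^2 / (a + 1) * (1 + x) * exp(-a * x), x > 0
def lindley_logpdf(x, a):
    return 2 * np.log(a) - np.log(a + 1) + np.log1p(x) - a * x

def neg_loglik(a, x):
    return -np.sum(lindley_logpdf(x, a))

# simulate stand-in data via the mixture representation:
# Exp(a) with probability a/(a+1), Gamma(2, rate a) with probability 1/(a+1)
rng = np.random.default_rng(1)
a_true, n = 1.5, 200
is_exp = rng.random(n) < a_true / (a_true + 1)
x = np.where(is_exp, rng.exponential(1 / a_true, n), rng.gamma(2, 1 / a_true, n))

# classical estimate: maximise the likelihood numerically
mle = minimize_scalar(neg_loglik, bounds=(1e-6, 50), args=(x,), method="bounded").x

# simple Bayes estimate (posterior mean) under a Gamma(0.01, 0.01) prior,
# evaluated on a uniform grid around the MLE to avoid underflow
ll_mle = -neg_loglik(mle, x)
grid = np.linspace(max(mle - 1.0, 1e-6), mle + 1.0, 2001)
rel_loglik = np.array([-neg_loglik(a, x) for a in grid]) - ll_mle
post = np.exp(rel_loglik) * grid ** (0.01 - 1) * np.exp(-0.01 * grid)
bayes = np.sum(grid * post) / np.sum(post)
print(round(mle, 3), round(bayes, 3))
```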
{"title":"A New Weighted-Lindley Distribution: Properties, Classical and Bayesian Estimation with an Application","authors":"B. Hosseini, M. Afshari, M. Alizadeh, A. Afify","doi":"10.18187/pjsor.v18i4.4106","DOIUrl":"https://doi.org/10.18187/pjsor.v18i4.4106","url":null,"abstract":"The choice of the most suitable statistical distribution for modeling data is very important. Generally, the new distributions are more flexible to model real data that present a high degree of skewness and kurtosis. In this paper, we define a new one-parameter lifetime distribution, so-called weighted-Lindley distribution. Some of its basic properties are investigated. Some classical and Bayesian methods of estimation have been used for estimating its parameter. The behavior of these estimators were investigated by a graphical simulation study. A real data set is analyzed to investigate the flexibility of the new weighted-Lindley distribution.","PeriodicalId":19973,"journal":{"name":"Pakistan Journal of Statistics and Operation Research","volume":" ","pages":""},"PeriodicalIF":1.5,"publicationDate":"2022-12-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45687830","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Bayesian Estimation of Transmuted Lomax Mixture Model with an Application to Type-I Censored Windshield Data
Pub Date: 2022-12-06 | DOI: 10.18187/pjsor.v18i4.4059
Muntazir Mehdi, M. Aslam, N. Feroze
Transmuted distributions have recently been a focus of attention for researchers because of their flexibility and applicability in statistics. However, only a few contributions have considered estimation for mixtures of transmuted lifetime models, especially under Bayesian methods. We consider Bayesian estimation of the transmuted Lomax mixture model (TLMM) for type-I censored samples. The Bayes estimates (BEs) are derived under both informative and non-informative priors. The BEs and posterior risks (PRs) are evaluated using four different loss functions (LFs), two symmetric and two asymmetric, namely the squared error loss function (SELF), precautionary loss function (PLF), weighted balance loss function (WBLF), and general entropy loss function (GELF). Simulations based on the Lindley approximation method are run to compare the BEs under various sample sizes and censoring rates. The estimates under the informative prior and the GELF were found to be superior to their counterparts. The applicability of the proposed estimates is illustrated through the analysis of real data on type-I censored failure times of airplane windshields.
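For readers unfamiliar with the loss functions named above, the sketch below computes Bayes estimates of a positive parameter from posterior draws using the standard closed forms: the posterior mean for SELF, sqrt(E[theta^2]) for PLF, and (E[theta^{-c}])^{-1/c} for GELF; the WBLF line uses one common form, E[theta^2]/E[theta], although its definition varies in the literature. The paper itself uses the Lindley approximation rather than posterior draws, so this is only an illustration of the estimators, and the toy gamma draws are an assumption.

```python
import numpy as np

def bayes_estimates(draws, c=0.5):
    """Bayes estimators of a positive parameter from posterior draws
    under several loss functions (standard closed forms)."""
    draws = np.asarray(draws, float)
    return {
        "SELF": draws.mean(),                        # squared error loss: posterior mean
        "PLF":  np.sqrt(np.mean(draws ** 2)),        # precautionary loss: sqrt of E[theta^2]
        "GELF": np.mean(draws ** (-c)) ** (-1 / c),  # general entropy loss with shape c
        "WBLF": np.mean(draws ** 2) / draws.mean(),  # one common weighted balance loss form
    }

# toy posterior draws for illustration only
draws = np.random.default_rng(2).gamma(shape=20, scale=0.1, size=5000)
print({k: round(v, 4) for k, v in bayes_estimates(draws).items()})
```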
{"title":"Bayesian Estimation of Transmuted Lomax Mixture Model with an Application to Type-I Censored Windshield Data","authors":"Muntazir Mehdi, M. Aslam, N. Feroze","doi":"10.18187/pjsor.v18i4.4059","DOIUrl":"https://doi.org/10.18187/pjsor.v18i4.4059","url":null,"abstract":"Transmuted distributions have been centered of focus for researchers recently due to their flexibility and applicability in statistics. However, the only few contributions have considered estimation for mixture of transmuted lifetime models especially under Bayesian methods has been explored more recently. We have considered the Bayesian estimation of transmuted Lomax mixture model (TLMM) for type-I censored samples. The Bayes estimates (BEs) for informative and non-informative priors. The BEs and posterior risks (PRs) are evaluated using four different loss functions (LFs), two symmetric and two asymmetric, namely the squared error loss function (SELF), precautionary loss function (PLF), weighted balance loss function (WBLF), and general entropy loss function (GELF). Simulations are run using Lindley Approximation method to compare the BEs under various sample sizes and censoring rates. The estimates under informative prior and GELF were found superior to their counterparts. The applicability of the proposed estimates has been illustrated using the analysis of a real data regarding type-I censored failure times of windshields airplanes.","PeriodicalId":19973,"journal":{"name":"Pakistan Journal of Statistics and Operation Research","volume":" ","pages":""},"PeriodicalIF":1.5,"publicationDate":"2022-12-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47060269","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
f-divergence regression models for compositional data
Pub Date: 2022-12-05 | DOI: 10.18187/pjsor.v18i4.3969
A. Alenazi
The paper considers the class of $f$-divergence regression models as alternatives to parametric regression models for compositional data. The special cases examined in this paper include the Jensen-Shannon, Kullback-Leibler, Hellinger, $\chi^2$ and total variation divergences. Strong advantages of the proposed regression models are a) the absence of parametric assumptions and b) the ability to treat zero values (which commonly occur in practice) naturally. Extensive Monte Carlo simulation studies comparatively assess the performance of the models in terms of bias, and an empirical evaluation using real data examines further aspects, such as predictive performance and computational cost. The results reveal that the Kullback-Leibler and Jensen-Shannon divergence regression models exhibit high-quality performance in several respects. Finally, penalised versions of the Kullback-Leibler divergence regression are introduced and illustrated using real data, rendering this model the preferred choice in practice.
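A minimal sketch of the Kullback-Leibler case is given below: fitted compositions come from a multinomial-logit link, and the coefficients minimise the total KL divergence between observed and fitted compositions, with 0 log 0 treated as 0 so that zero components cause no difficulty. The link, the simulated Dirichlet data, and the optimiser are assumptions for the example and do not reproduce the paper's exact estimator or its penalised variants.

```python
import numpy as np
from scipy.optimize import minimize

def fitted_compositions(beta, X, D):
    """Multinomial-logit link; the last component is the reference category."""
    B = np.column_stack([beta.reshape(X.shape[1], D - 1), np.zeros(X.shape[1])])
    E = np.exp(X @ B)
    return E / E.sum(axis=1, keepdims=True)

def kl_objective(beta, X, Y):
    D = Y.shape[1]
    M = fitted_compositions(beta, X, D)
    with np.errstate(divide="ignore", invalid="ignore"):
        T = np.where(Y > 0, Y * np.log(Y / M), 0.0)   # 0 * log 0 treated as 0: zeros are allowed
    return T.sum()

rng = np.random.default_rng(3)
n, D = 150, 3
X = np.column_stack([np.ones(n), rng.normal(size=n)])   # intercept + one covariate
true_B = np.array([[0.2, -0.4], [0.8, -0.6]])
Y = np.array([rng.dirichlet(np.exp(x @ np.column_stack([true_B, np.zeros(2)])) * 5) for x in X])

beta0 = np.zeros(X.shape[1] * (D - 1))
fit = minimize(kl_objective, beta0, args=(X, Y), method="BFGS")
print(fit.x.reshape(X.shape[1], D - 1))
```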
{"title":"f-divergence regression models for compositional data","authors":"A. Alenazi","doi":"10.18187/pjsor.v18i4.3969","DOIUrl":"https://doi.org/10.18187/pjsor.v18i4.3969","url":null,"abstract":"The paper considers the class of $f$-divergence regression models as alternatives to parametric regression models for compositional data. The special cases examined in this paper include the Jensen-Shannon, Kullback-Leibler, Hellinger, chi^2 and total variation divergence. Strong advantages of the proposed regression models are a) the absence of parametric assumptions and b) the ability to treat zero values (which commonly occur in practice) naturally. Extensive Monte Carlo simulation studies comparatively assess the performance of the models in terms of bias and an empirical evaluation using real data examining further aspects, such as predictive performance and computational cost. The results reveal that Kullback-Leibler and Jensen-Shannon divergence regression models exhibited high quality performance in multiple directions. Ultimately, penalised versions of the Kullback-Leibler divergence regression are introduced and illustrated using real data rendering this model the optimal model to utilise in practice.","PeriodicalId":19973,"journal":{"name":"Pakistan Journal of Statistics and Operation Research","volume":"1 1","pages":""},"PeriodicalIF":1.5,"publicationDate":"2022-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41360111","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Double Burr Type XII Model: Censored and Uncensored Validation Using a New Nikulin-Rao-Robson Goodness-of-Fit Test with Bayesian and Non-Bayesian Estimation Methods
Pub Date: 2022-12-05 | DOI: 10.18187/pjsor.v18i4.3600
M. Ibrahim, M. M. Ali, H. Goual, H. Yousof
After studying the mathematical properties of the Double Burr XII model, we present Bayesian and non-Bayesian estimation of its unknown parameters. We also construct a new goodness-of-fit test for the cases of complete and censored samples. The modified test is based on the Nikulin-Rao-Robson statistic and is used for model validation. Simulations are performed to assess the new test, along with nine applications to real data.
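As background, the sketch below computes the equiprobable-cell Pearson statistic with maximum-likelihood-fitted parameters, which is the grouped chi-square building block that the Nikulin-Rao-Robson statistic extends with a correction term for parameters estimated from ungrouped data. It uses the ordinary Burr XII distribution as a stand-in for the Double Burr XII model, whose form is not given in the abstract; the cell count, simulated sample, and degrees of freedom are assumptions for the example.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

# Stand-in model: ordinary Burr XII with cdf F(x) = 1 - (1 + x^c)^(-k)
def burr12_cdf(x, c, k):
    return 1.0 - (1.0 + x ** c) ** (-k)

def burr12_logpdf(x, c, k):
    return np.log(c) + np.log(k) + (c - 1) * np.log(x) - (k + 1) * np.log1p(x ** c)

def neg_loglik(theta, x):
    c, k = np.exp(theta)                  # log-parameterisation keeps c, k > 0
    return -np.sum(burr12_logpdf(x, c, k))

def grouped_chi_square(x, n_cells=8):
    """Pearson statistic on equiprobable cells with ML-fitted parameters; the full
    NRR statistic adds a correction for estimating parameters from ungrouped data."""
    x = np.asarray(x, float)
    n = len(x)
    fit = minimize(neg_loglik, np.zeros(2), args=(x,), method="Nelder-Mead")
    c, k = np.exp(fit.x)
    # equiprobable cells: bin the fitted cdf values on [0, 1]
    u = burr12_cdf(x, c, k)
    observed = np.histogram(u, bins=np.linspace(0, 1, n_cells + 1))[0]
    expected = n / n_cells
    stat = np.sum((observed - expected) ** 2 / expected)
    # the plain chi-square reference is only approximate once parameters are
    # estimated; correcting this is exactly what the NRR statistic is for
    return stat, chi2.sf(stat, df=n_cells - 1)

rng = np.random.default_rng(4)
u = rng.random(300)
sample = ((1 - u) ** (-1 / 1.5) - 1) ** (1 / 2.0)   # Burr XII draws with c = 2, k = 1.5
print(grouped_chi_square(sample))
```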
{"title":"The Double Burr Type XII Model: Censored and Uncensored Validation Using a New Nikulin-Rao-Robson Goodness-of-Fit Test with Bayesian and Non-Bayesian Estimation Methods","authors":"M. Ibrahim, M. M. Ali, H. Goual, H. Yousof","doi":"10.18187/pjsor.v18i4.3600","DOIUrl":"https://doi.org/10.18187/pjsor.v18i4.3600","url":null,"abstract":"After studying the mathematical properties of the Double Burr XII model, we present Bayesian and non-Bayesian estimation for its unknown parameters. Also, we constructed a new statistical test for goodness-of-fit in case of complete and censored samples. The modified test is developed based on the Nikulin-Rao-Robson statistic for validation. Simulations are performed for assessing the new test along with nine applications on real data.","PeriodicalId":19973,"journal":{"name":"Pakistan Journal of Statistics and Operation Research","volume":" ","pages":""},"PeriodicalIF":1.5,"publicationDate":"2022-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48254989","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Bayesian bivariate spatial shared component model: mapping breast and cervical cancer mortality in Southern Brazil
Pub Date: 2022-12-04 | DOI: 10.18187/pjsor.v18i4.4095
E. Martinez, Diego Gafuri Silva, Larissa Intrebartoli Resende, Elisângela Aparecida da Silva Lizzi, J. Achcar
Spatial analysis techniques are used in the analysis of ecological studies, which take geographical areas as the units of observation. In this article, we propose a Bayesian bivariate spatial shared component model for mapping breast and cervical cancer mortality in Southern Brazil, based on the models introduced by Knorr-Held and Best (2001) and Held et al. (2005). Markov Chain Monte Carlo (MCMC) methods were used to spatially smooth the standardized mortality ratios (SMR) for both diseases. The Local Indicator of Spatial Association (LISA) was used to assess the existence of spatial clusters in specific geographical areas. This study was carried out using secondary data obtained from publicly available health information systems.
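The shared component model itself requires MCMC over a joint spatial prior, which is beyond a short example, but two ingredients named in the abstract are easy to show: the standardized mortality ratio and the local Moran's I statistic commonly used as the LISA. The sketch below computes both on an invented five-area toy map; the counts and the adjacency matrix are assumptions for the example, not the Southern Brazil data.

```python
import numpy as np

def smr(observed, expected):
    """Standardised mortality ratio per area: observed / expected deaths."""
    return np.asarray(observed, float) / np.asarray(expected, float)

def local_morans_i(values, W):
    """Local Moran's I (the usual LISA statistic) with a row-standardised
    binary contiguity matrix W."""
    z = values - values.mean()
    W = W / W.sum(axis=1, keepdims=True)   # row-standardise the weights
    s2 = (z ** 2).mean()
    return z * (W @ z) / s2

# toy example: 5 areas on a line, each adjacent to its neighbours
observed = np.array([12, 15, 30, 28, 9])
expected = np.array([10.2, 14.8, 18.1, 17.5, 11.0])
W = np.array([[0, 1, 0, 0, 0],
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], float)

ratios = smr(observed, expected)
print(np.round(ratios, 2), np.round(local_morans_i(ratios, W), 2))
```

Large positive local I values flag areas whose (high or low) SMRs resemble those of their neighbours, which is how clusters would be screened before or after the model-based smoothing.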
{"title":"Bayesian bivariate spatial shared component model: mapping breast and cervical cancer mortality in Southern Brazil","authors":"E. Martinez, Diego Gafuri Silva, Larissa Intrebartoli Resende, Elisângela Aparecida da Silva Lizzi, J. Achcar","doi":"10.18187/pjsor.v18i4.4095","DOIUrl":"https://doi.org/10.18187/pjsor.v18i4.4095","url":null,"abstract":"Spatial analysis techniques are used in the data analysis of ecological studies, which consider geographical areas as observation units. In this article, we propose a Bayesian bivariate spatial shared component model to mapping the breast and cervical cancer mortality in Southern Brazil, based on the models introduced by Knorr-Held and Best (2001) and Held et al. (2005). Markov Chain Monte Carlo (MCMC) methods were used to spatially smooth the standardized mortality ratios (SMR) for both diseases. Local Indicator of Spatial Association (LISA) was used to verify the existence of spatial clusters in specific geographical areas. This study was carried out using secondary data obtained from publicly available health information systems.","PeriodicalId":19973,"journal":{"name":"Pakistan Journal of Statistics and Operation Research","volume":" ","pages":""},"PeriodicalIF":1.5,"publicationDate":"2022-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42171702","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Odd Lomax Generalized Exponential Distribution: Application to Engineering and COVID-19 data
Pub Date: 2022-12-04 | DOI: 10.18187/pjsor.v18i4.4149
L. Sapkota, Vijay Kumar
This paper proposes the four-parameter odd Lomax generalized exponential distribution for the study of engineering and COVID-19 data. The statistical and mathematical properties of this distribution, such as a linear representation of the probability density function, the survival function, hazard rate function, moments, quantile function, order statistics, entropy, mean deviation, characteristic function, and average residual life function, are established. The parameters of the proposed distribution are estimated using the maximum likelihood estimation (MLE), maximum product spacings (MPS), least-squares estimation (LSE), and Cramér-von Mises estimation (CVME) methods. A Monte Carlo simulation experiment is carried out to study the MLEs. The applicability of the proposed distribution is evaluated using two real datasets related to engineering and COVID-19. All computational work was performed in the R programming language.
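Of the four estimation methods listed, maximum product spacings is perhaps the least familiar, so the sketch below illustrates it (in Python, although the paper's computations were done in R) on the two-parameter generalized exponential baseline with cdf F(x) = (1 - e^{-lambda x})^alpha, used as a stand-in for the four-parameter odd Lomax generalized exponential distribution; the stand-in model, the simulated data, and the optimiser settings are assumptions for the example.

```python
import numpy as np
from scipy.optimize import minimize

# Stand-in model: generalized exponential cdf F(x) = (1 - exp(-lam * x))**alpha
def ge_cdf(x, alpha, lam):
    return (1.0 - np.exp(-lam * x)) ** alpha

def neg_mps(theta, x_sorted):
    alpha, lam = np.exp(theta)                   # keep both parameters positive
    F = np.concatenate(([0.0], ge_cdf(x_sorted, alpha, lam), [1.0]))
    spacings = np.diff(F)
    spacings = np.clip(spacings, 1e-300, None)   # guard against zero spacings (ties)
    return -np.mean(np.log(spacings))

def mps_fit(x):
    """Maximum product spacings: maximise the mean log-spacing of the fitted cdf
    evaluated at the ordered sample, with F = 0 and F = 1 appended at the ends."""
    x_sorted = np.sort(np.asarray(x, float))
    res = minimize(neg_mps, np.zeros(2), args=(x_sorted,), method="Nelder-Mead")
    return np.exp(res.x)

# simulate from the stand-in model via inverse transform: x = -log(1 - u**(1/alpha)) / lam
rng = np.random.default_rng(5)
alpha_true, lam_true = 2.0, 0.8
u = rng.random(250)
sample = -np.log(1.0 - u ** (1.0 / alpha_true)) / lam_true
print(np.round(mps_fit(sample), 3))
```

The MLE, LSE, and CVME variants differ only in the objective function being optimised; the same fitting skeleton applies.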
{"title":"Odd Lomax Generalized Exponential Distribution: Application to Engineering and COVID-19 data","authors":"L. Sapkota, Vijay Kumar","doi":"10.18187/pjsor.v18i4.4149","DOIUrl":"https://doi.org/10.18187/pjsor.v18i4.4149","url":null,"abstract":"This paper proposes the 4-parameter odd Lomax generalized exponential distribution for the study of engineering and COVID-19 data. The statistical and mathematical properties of this distribution such as a linear representation of the probability density function, survival function, hazard rate function, moments, quantile function, order statistics, entropy, mean deviation, characteristic function, and average residual life function are established. The estimates of parameters of the proposed distribution are obtained using maximum likelihood estimation (MLE), Maximum product spacings (MPS), least-square estimation (LSE), and Cramer-Von-Mises estimation (CVME) methods. A Monte-Carlo simulation experiment is carried out to study the MLEs. The applicability of the proposed distribution is evaluated using two real datasets related to engineering and COVID-19. All the computational work was performed in R programming software.","PeriodicalId":19973,"journal":{"name":"Pakistan Journal of Statistics and Operation Research","volume":" ","pages":""},"PeriodicalIF":1.5,"publicationDate":"2022-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46058411","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Characterizations of Fourteen (2021-2022) Proposed Discrete Distributions
Pub Date: 2022-12-04 | DOI: 10.18187/pjsor.v18i4.4048
G. Hamedani, A. Roshani
As mentioned in our previous works, in real-life situations it is sometimes very difficult to obtain samples from a continuous distribution. The observed values are generally discrete because they are not measured on a continuous scale. In some cases it may be possible to measure the observations on a continuous scale; however, they may be recorded in a manner for which a discrete model seems more suitable. Consequently, discrete models appear quite frequently in applied fields and have attracted the attention of many researchers. Characterizations of distributions are important to many researchers in applied fields. An investigator will be vitally interested in knowing whether their model fits the requirements of a particular distribution; to this end, one relies on characterizations of that distribution, which provide conditions under which the underlying distribution is indeed the particular one. Here, we present certain characterizations of 14 recently introduced discrete distributions.
{"title":"Characterizations of Fourteen (2021-2022) Proposed Discrete Distributions","authors":"G. Hamedani, A. Roshani","doi":"10.18187/pjsor.v18i4.4048","DOIUrl":"https://doi.org/10.18187/pjsor.v18i4.4048","url":null,"abstract":"As we mentioned in our previous works, sometimes in real life cases, it is very difficult to obtain samples from a continuous distribution. The observed values are generally discrete due to the fact that they are not measured in continuum. In some cases, it may be possible to measure the observations via a continuous scale, however, they may be recorded in a manner in which a discrete model seems more suitable. Consequently, the discrete models are appearing quite frequently in applied fields and have attracted the attention of many researchers. \u0000Characterizations of distributions are important to many researchers in the applied fields. An investigator will be vitally interested to know if their model fits the requirements of a particular distribution. To this end, one will depend on the characterizations of this distribution which provide conditions under which the underlying distribution is indeed that particular distribution. Here, we present certain characterizations of 14 recently introduced discrete distributions.","PeriodicalId":19973,"journal":{"name":"Pakistan Journal of Statistics and Operation Research","volume":" ","pages":""},"PeriodicalIF":1.5,"publicationDate":"2022-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48665676","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}