Pub Date: 2020-12-11 | DOI: 10.1080/01966324.2020.1854138
Elif Erçelik, Mustafa Nadar
Abstract In this work, a new kernel estimator based on the scaled inverse chi-squared distribution is proposed for estimating densities with nonnegative support. The optimal rates of convergence for the mean squared error (MSE) and the mean integrated squared error (MISE) are obtained. An adaptive Bayesian bandwidth selection method with Lindley approximation is used for heavy-tailed distributions. Simulation studies compare the average integrated squared error (ISE) attained with bandwidths from the global least squares cross-validation bandwidth selection method and with bandwidths from the adaptive Bayesian method with Lindley approximation. Finally, real data sets are presented to illustrate the findings.
{"title":"A New Kernel Estimator Based on Scaled Inverse Chi-Squared Density Function","authors":"Elif Erçelik, Mustafa Nadar","doi":"10.1080/01966324.2020.1854138","DOIUrl":"https://doi.org/10.1080/01966324.2020.1854138","url":null,"abstract":"Abstract In this work, a new kernel estimator based on scaled inverse chi-squared distribution is proposed to estimate densities having nonnegative support. The optimal rates of convergence for the mean squared error (MSE) and the mean integrated squared error (MISE) are obtained. Adaptive Bayesian bandwidth selection method with Lindley approximation is used for heavy tailed distributions. Simulation studies are performed to compare the performance of the average integrated square error (ISE) by using the bandwidths obtained from the global least squares cross-validation bandwidth selection method and the bandwidths obtained from adaptive Bayesian method with Lindley approximation. Finally, real data sets are presented to illustrate the findings.","PeriodicalId":35850,"journal":{"name":"American Journal of Mathematical and Management Sciences","volume":"40 1","pages":"306 - 319"},"PeriodicalIF":0.0,"publicationDate":"2020-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/01966324.2020.1854138","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47614798","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-11-24 | DOI: 10.1080/01966324.2020.1848667
Ali I. Genç
Abstract Although computer simulations can be used to compute various characteristics of a stochastic problem approximately, exact results are frequently required. When the uncertainty in such a problem is restricted to a bounded domain, a triangular distribution may be an appropriate model. In this work, we consider two independent, triangularly distributed random variables. We derive the exact distribution of the ratio of the maximum of these random variables to their minimum. This ratio of extreme statistics may be used as a dispersion measure. We present the quotient distribution in a computable form. Two possible applications are also given.
{"title":"On the Quotient of Extreme Order Statistics from Two Triangularly Distributed Random Variables","authors":"Ali I. Genç","doi":"10.1080/01966324.2020.1848667","DOIUrl":"https://doi.org/10.1080/01966324.2020.1848667","url":null,"abstract":"Abstract Although computer simulations can be used in computations of various characteristics of a stochastic problem to get an approximate answer, we frequently require exact results. When the uncertainty of this problem is restricted within some bounded domain, a triangular distribution may be used appropriately for modeling. In this work, we consider two triangularly and independently distributed random variables. We derive the exact distribution of the ratio of the maximum of these random variables to their minimum. This ratio of extreme statistics may be used as a dispersion measure. We present the quotient distribution in a computable form. Two possible applications are also given.","PeriodicalId":35850,"journal":{"name":"American Journal of Mathematical and Management Sciences","volume":"40 1","pages":"289 - 305"},"PeriodicalIF":0.0,"publicationDate":"2020-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/01966324.2020.1848667","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48964001","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-11-24 | DOI: 10.1080/01966324.2020.1847224
Cenk Çalışkan
Abstract The classical Economic Order Quantity (EOQ) model assumes simple interest to represent the opportunity cost of capital tied up in inventory. Recently, the classical model has been extended to incorporate compounding, and an intuitive closed-form solution has been proposed. The compounding-based model is more realistic than the original EOQ model because compound interest is the standard practice in finance and banking. The closed-form solution proposed for this recent model is based on an approximation of the annual compound-interest-based opportunity cost. However, the derivation of that approximation is rather long and complicated, involving several applications of L’Hôpital’s rule. Here, we show an easier way to approximate the annual compound-interest-based opportunity cost. Our derivation is shorter and does not require L’Hôpital’s rule. We also demonstrate that the approximation is remarkably close to the exact model and that it yields the same intuitive closed-form solution as the earlier one.
{"title":"On the Economic Order Quantity Model with Compounding","authors":"Cenk Çalışkan","doi":"10.1080/01966324.2020.1847224","DOIUrl":"https://doi.org/10.1080/01966324.2020.1847224","url":null,"abstract":"Abstract The classical Economic Order Quantity (EOQ) model assumes simple interest to represent the opportunity cost of capital tied up in the inventory. Recently, the classical model has been extended to incorporate compounding, and an intuitive closed-form solution has been proposed. The compounding based model is more realistic than the original EOQ model because compound interest is the standard practice in finance and banking. The resulting closed-form solution proposed for this recent model is based on an approximation of the annual compound interest-based opportunity cost. However, the derivation of the approximation model is rather long and complicated, involving the use of the L’Hôpital’s rule several times. Here, we show an easier way to approximate the annual compound interest-based opportunity cost. Our derivation is shorter and it does not require the use of the L’Hôpital’s rule. We also demonstrate that the approximation is remarkably close to the exact model, and it results in the same intuitive closed-form solution as the earlier one.","PeriodicalId":35850,"journal":{"name":"American Journal of Mathematical and Management Sciences","volume":"40 1","pages":"283 - 288"},"PeriodicalIF":0.0,"publicationDate":"2020-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/01966324.2020.1847224","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48043929","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-11-13 | DOI: 10.1080/01966324.2020.1835589
V. Deepthi, Joby K. Jose
Abstract This article describes Bayes estimation of various queue characteristics, such as the queue parameters λ and μ, and queue performance measures such as the traffic intensity, the expected waiting time in the queue, and the expected queue size of the queueing model, using McKay’s bivariate gamma distribution as a prior under the squared error loss function as well as the entropy loss function. Closed-form expressions are obtained for the Bayes estimators of the queue parameters and various queue performance measures using properties of the confluent hypergeometric function and the Gauss hypergeometric function. Bootstrap Bayes estimates and credible regions are computed using simulated data for different sets of hyperparameter values. We also apply the Markov chain Monte Carlo (MCMC) method to compute Bayes estimates and credible intervals of various queue characteristics using the same joint prior distribution and compare the values with the bootstrap estimates.
{"title":"Bayesian Estimation of Queueing Model using Bivariate Prior","authors":"V. Deepthi, Joby K. Jose","doi":"10.1080/01966324.2020.1835589","DOIUrl":"https://doi.org/10.1080/01966324.2020.1835589","url":null,"abstract":"Abstract This article describes Bayes estimation of various queue characteristics such as queue parameters λ and μ, and queue performance measures like traffic intensity, expected waiting time in the queue, and expected queue size of model using Mckay’s bivariate gamma distribution as prior under squared error loss function as well as entropy loss function. Closed form expressions are obtained for the Bayes estimators of the queue parameters and various queue performance measures using the properties of confluent hyper geometric function and Gauss hyper geometric function. Bootstrap Bayes estimates and credible regions are computed using simulated data for different set of hyper parameter values. Also we apply Markov Chain Monte Carlo method and compute Bayes estimates and credible intervals of various queue characteristics using the same joint prior distribution and compare the values with bootstrap estimates.","PeriodicalId":35850,"journal":{"name":"American Journal of Mathematical and Management Sciences","volume":"40 1","pages":"88 - 105"},"PeriodicalIF":0.0,"publicationDate":"2020-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/01966324.2020.1835589","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45434349","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-11-13 | DOI: 10.1080/01966324.2020.1842271
A. Bouchentouf, M. Cherfaoui, Mohamed Boualem
Abstract This paper deals with a finite-capacity multi-server Markovian queueing model with Bernoulli feedback, a synchronous multiple vacation policy, and customers’ impatience (balking and reneging). By employing a certain customer retention mechanism, impatient customers can be retained in the system. Applications of the suggested queueing model can be found in a wide variety of practical systems, including modern information and communication technology (ICT) networks, call centers, and manufacturing systems. Using the recursive method, the steady-state probabilities of the model are obtained and various performance measures are derived. Some important particular cases are then provided. Finally, different numerical examples are presented to demonstrate how the parameters of the model influence the behavior of the stationary characteristics of the system.
{"title":"Analysis and Performance Evaluation of Markovian Feedback Multi-Server Queueing Model with Vacation and Impatience","authors":"A. Bouchentouf, M. Cherfaoui, Mohamed Boualem","doi":"10.1080/01966324.2020.1842271","DOIUrl":"https://doi.org/10.1080/01966324.2020.1842271","url":null,"abstract":"Abstract This paper deals with a finite capacity multi-server Markovian queueing model with Bernoulli feedback, synchronous multiple vacation policy and customers’ impatience (balking and reneging). By employing certain customer retention mechanism, impatient customers can be retained in the system. Applications of the suggested queueing model can be found in a wide variety of practical systems including modern information and communication technology (ICT) networks, call centers, and manufacturing systems. Using the recursive method, the steady state probabilities of the model are obtained. Various performance measures are derived. Then, some important particular cases are provided. Finally, different numerical examples are presented to demonstrate how the different parameters of the model influence the behavior of the stationary characteristics of the system.","PeriodicalId":35850,"journal":{"name":"American Journal of Mathematical and Management Sciences","volume":"40 1","pages":"261 - 282"},"PeriodicalIF":0.0,"publicationDate":"2020-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/01966324.2020.1842271","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43302423","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-11-09 | DOI: 10.1080/01966324.2020.1835590
M. H. Abu-Moussa, M. El-din, M. A. Mosilhy
Abstract In this article, we combine the adaptive progressive Type-II censoring model with the general progressive model to obtain estimates of the parameters of the Gompertz distribution, together with Bayesian prediction intervals. Estimation is carried out using maximum likelihood estimation (MLE) and the Bayesian method. Bayesian estimates are constructed under four types of loss functions. The credible intervals and the asymptotic confidence intervals for the parameters of the Gompertz distribution are determined based on the Bayesian estimates and the MLEs, respectively. Finally, a real data example and a simulation study are discussed to compare the proposed methods.
{"title":"Statistical Inference for Gompertz Distribution Using the Adaptive-General Progressive Type-II Censored Samples","authors":"M. H. Abu-Moussa, M. El-din, M. A. Mosilhy","doi":"10.1080/01966324.2020.1835590","DOIUrl":"https://doi.org/10.1080/01966324.2020.1835590","url":null,"abstract":"Abstract In this article, we combine the adaptive progressive Type-II censoring model with the general progressive model, to obtain the estimates for the parameters of Gompertz distribution, and the Bayesian prediction intervals. Estimation is executed using the maximum likelihood method (MLE) and the Bayesian method. Bayesian estimates are constructed depending on four types of loss functions. The credible intervals and the asymptotic confidence intervals are determined for the parameters of Gompertz distribution based on the Bayesian estimates and the MLEs, respectively. Finally, a real data example and the simulation study are discussed to compare the proposed methods.","PeriodicalId":35850,"journal":{"name":"American Journal of Mathematical and Management Sciences","volume":"40 1","pages":"189 - 211"},"PeriodicalIF":0.0,"publicationDate":"2020-11-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/01966324.2020.1835590","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46293461","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-11-04 | DOI: 10.1080/01966324.2020.1839609
G. Bansal, Adarsh Anand, Mohini Agarwal
Abstract In order to maintain an ecological balance between meeting new demand and reducing waste, the “remanufacturing” process has emerged as a tool that delivers long-term benefits to both firms and consumers. The role of remanufacturing is important in new product development, as it hastens the rate of system degradation. In view of this, the current study presents a methodical approach to estimate and analyze how remanufacturing helps in minimizing cost while satisfying demand. To determine the optimal time point with maximum profit and minimum cost as the overall objective, multi-attribute utility theory (MAUT) has been utilized. The cost of remanufacturing the product and its demand have been considered as the significant components that affect the optimal point. Furthermore, the proposed model has been validated on real-life sales data from the automobile industry.
{"title":"Modeling the Impact of Remanufacturing Process in Determining Demand-Cost Trade off Using MAUT","authors":"G. Bansal, Adarsh Anand, Mohini Agarwal","doi":"10.1080/01966324.2020.1839609","DOIUrl":"https://doi.org/10.1080/01966324.2020.1839609","url":null,"abstract":"Abstract In order to maintain an ecological balance between new demands and reducing waste, a “remanufacturing” process has emerged as a tool that collectively gratifies long-term benefits to both the firms as well as the consumers. The role of remanufacturing is important in new product development as it fastens the rate of system degradation. In view of this, the current study presents a methodical approach to estimate and analyze how the concept of remanufacturing helps in minimizing cost while satisfying demand. To determine the optimal time point of maximum profit and minimum cost as the overall objective, multi-attribute utility theory (MAUT) have been utilized. The cost of remanufacturing the product and its demand has been considered as significant components which affect the optimal point. Furthermore, the proposed model has been validated on real-life sales data of automobile industry.","PeriodicalId":35850,"journal":{"name":"American Journal of Mathematical and Management Sciences","volume":"40 1","pages":"120 - 133"},"PeriodicalIF":0.0,"publicationDate":"2020-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/01966324.2020.1839609","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47885238","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-11-03 | DOI: 10.1080/01966324.2020.1835591
Bao-Anh Dang, K. Krishnamoorthy, Shanshan Lv
Abstract Capture-recapture is a popular sampling method for estimating the total number of individuals in a population. This method is also used to estimate the size of a target population based on several incomplete records/databases of individuals. In this context, a simple approximate confidence interval (CI) based on the hypergeometric distribution is proposed. The proposed CI is compared with a popular approximate CI, the likelihood CI, and an exact admissible CI in terms of coverage probability and precision. Our numerical study indicates that the proposed CI is very satisfactory in terms of coverage probability, better than the popular approximate CI, and much shorter than the admissible CI. The interval estimation method is illustrated using a few examples with epidemiological data.
{"title":"Confidence Intervals for a Population Size Based on Capture-Recapture Data","authors":"Bao-Anh Dang, K. Krishnamoorthy, Shanshan Lv","doi":"10.1080/01966324.2020.1835591","DOIUrl":"https://doi.org/10.1080/01966324.2020.1835591","url":null,"abstract":"Abstract Capture-recapture is a popular sampling method to estimate the total number of individuals in a population. This method is also used to estimate the size of a target population based on several incomplete records/databases of individuals. In this context, a simple approximate confidence interval (CI) based on the hypergeometric distribution is proposed. The proposed CI is compared with a popular approximate CI, likelihood CI and an exact admissible CI in terms of coverage probability and precision. Our numerical study indicates that the proposed CI is very satisfactory in terms of coverage probability, better than the popular approximate CI, and much shorter than the admissible CI. The interval estimation method is illustrated using a few examples with epidemiological data.","PeriodicalId":35850,"journal":{"name":"American Journal of Mathematical and Management Sciences","volume":"40 1","pages":"212 - 224"},"PeriodicalIF":0.0,"publicationDate":"2020-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/01966324.2020.1835591","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46354984","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-10-28 | DOI: 10.1080/01966324.2020.1837042
Tharshanna Nadarajah, A. Variyath, J. Loredo-Osti
Abstract Longitudinal data with a large number of covariates have become common in many applications such as epidemiology, clinical research, and therapeutic evaluation. The identification of a sub-model that adequately represents the data is necessary for easy interpretation. Existing information-theoretic approaches such as AIC and BIC are useful but computationally inefficient, because they require evaluating all possible subsets. A newer class of penalized likelihood methods, such as LASSO and SCAD, is efficient in these situations. All of these methods rely on parametric modeling of the response of interest. The joint likelihood function for longitudinal data is challenging, particularly for correlated discrete outcome data. In such a situation, we propose penalized empirical likelihood (PEL) based on generalized estimating equations (GEE), by which variable selection and estimation of the coefficients are carried out simultaneously. We discuss its characteristics and asymptotic properties and present an efficient computational algorithm for optimizing the PEL. Simulation studies show that when the model assumptions are true, its performance is comparable to that of existing methods, and when the model is misspecified, our method has clear advantages over the existing methods. We have applied the method to two case examples.
{"title":"Penalized Empirical Likelihood-Based Variable Selection for Longitudinal Data Analysis","authors":"Tharshanna Nadarajah, A. Variyath, J. Loredo-Osti","doi":"10.1080/01966324.2020.1837042","DOIUrl":"https://doi.org/10.1080/01966324.2020.1837042","url":null,"abstract":"Abstract Longitudinal data with a large number of covariates have become common in many applications such as epidemiology, clinical research, and therapeutic evaluation. The identification of a sub-model that adequately represents the data are necessary for easy interpretation. Existing information theoretic-approaches such as AIC and BIC are useful, but computationally not efficient due to an evaluation of all possible subsets. A new class of penalized likelihood methods such as LASSO, SCAD, etc. are efficient in these situations. All these methods rely on the parametric modeling of the response of interest. The joint likelihood function for longitudinal data is challenging, particularly for correlated discrete outcome data. In such a situation, we propose penalized empirical likelihood (PEL) based on generalized estimating equations (GEE) by which the variable selection and the estimation of the coefficients are carried out simultaneously. We discuss its characteristics and asymptotic properties and present an efficient computational algorithm for optimizing PEL. Simulation studies show that when model assumptions are true, its performance is comparable to that of the existing methods and when the model is misspecified, our method has clear advantages over the existing methods. We have applied the method to two case examples.","PeriodicalId":35850,"journal":{"name":"American Journal of Mathematical and Management Sciences","volume":"40 1","pages":"241 - 260"},"PeriodicalIF":0.0,"publicationDate":"2020-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/01966324.2020.1837042","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49448516","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-10-26 | DOI: 10.1080/01966324.2020.1833794
N. Nagamani, M. Tripathy, Somesh Kumar
Abstract Estimation under equality restrictions is an age-old problem and has been considered by several researchers due to its practical applications and the theoretical challenges involved. In particular, the problem has been extensively studied from both the classical and the decision-theoretic points of view when the underlying distribution is normal. In this paper, we consider the problem when the underlying distribution is non-normal, namely logistic. Specifically, estimation of the common scale parameter of two logistic populations is considered when the location parameters are unknown. It is observed that closed forms of the maximum likelihood estimators (MLEs) for the associated parameters do not exist, so the MLEs are derived using certain numerical techniques. The asymptotic confidence intervals are also obtained numerically, as they depend on the MLEs. Approximate Bayes estimators are proposed using non-informative as well as conjugate priors with respect to the squared error (SE) and LINEX loss functions. A simulation study is conducted to evaluate the proposed estimators and compare their performances through mean squared error (MSE) and bias. Finally, two real-life examples are considered to show the potential applications of the proposed model and to illustrate the method of estimation.
{"title":"Estimating Common Scale Parameter of Two Logistic Populations: A Bayesian Study","authors":"N. Nagamani, M. Tripathy, Somesh Kumar","doi":"10.1080/01966324.2020.1833794","DOIUrl":"https://doi.org/10.1080/01966324.2020.1833794","url":null,"abstract":"Abstract Estimation under equality restrictions is an age old problem and has been considered by several researchers in the past due to practical applications and theoretical challenges involved in it. Particularly, the problem has been extensively studied from classical as well as decision theoretic point of view when the underlying distribution is normal. In this paper, we consider the problem when the underlying distribution is non-normal, say, logistic. Specifically, estimation of the common scale parameter of two logistic populations has been considered when the location parameters are unknown. It is observed that closed forms of the maximum likelihood estimators (MLEs) for the associated parameters do not exist. Using certain numerical techniques the MLEs have been derived. The asymptotic confidence intervals have been derived numerically too, as these also depend on the MLEs. Approximate Bayes estimators are proposed using non-informative as well as conjugate priors with respect to the squared error (SE) and the LINEX loss functions. A simulation study has been conducted to evaluate the proposed estimators and compare their performances through mean squared error (MSE) and bias. Finally, two real life examples have been considered in order to show the potential applications of the proposed model and illustrate the method of estimation.","PeriodicalId":35850,"journal":{"name":"American Journal of Mathematical and Management Sciences","volume":"40 1","pages":"44 - 67"},"PeriodicalIF":0.0,"publicationDate":"2020-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/01966324.2020.1833794","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45354656","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}