Pub Date: 2022-04-24 | DOI: 10.1080/01966324.2022.2062270
Sally M. Borham, A. Tharwat, E. Hassan
Abstract A key purpose of service network design is to determine the best possible location(s) for each service center. The facilities must provide a quick and easy response to callers within a reasonable distance, especially in urgent cases. It is well known that this is an NP-hard, non-convex, and non-differentiable optimization problem. Under some simplifying assumptions, however, the problem can be solved in polynomial time. This article introduces an algorithm for determining the best possible locations of emergency service centers in the case where these centers are located on simple closed curves (e.g., ring roads). The proposed model can be applied to the design of emergency centers on ring roads in new cities, which have become one of the most important designs for relieving traffic congestion. The problem is mathematically formulated, an algorithm for solving it under simplifying assumptions is proposed, the mathematics behind the algorithm is given, and the algorithm is illustrated by a numerical example.
Published as: "Emergency Service Location Problem with Ring Roads," American Journal of Mathematical and Management Sciences, 41(1), 373–386.
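The minimax flavor of the problem can be made concrete with a small sketch. This is not the paper's polynomial-time algorithm; it is a hedged brute-force illustration, assuming a single center, demand points given as angles on a unit-radius ring road, and travel along the ring, that locates the center minimizing the worst-case arc distance.

```python
import math

def arc_distance(theta1, theta2, radius=1.0):
    """Shortest travel distance along a circular ring between two angular positions."""
    d = abs(theta1 - theta2) % (2 * math.pi)
    return radius * min(d, 2 * math.pi - d)

def best_center_on_ring(demand_angles, radius=1.0, grid=10000):
    """Brute-force minimax location of a single center on the ring (discretized)."""
    best_theta, best_cost = None, float("inf")
    for i in range(grid):
        theta = 2 * math.pi * i / grid
        cost = max(arc_distance(theta, a, radius) for a in demand_angles)
        if cost < best_cost:
            best_theta, best_cost = theta, cost
    return best_theta, best_cost
```

With demand at angles 0 and π/2, the sketch places the center at π/4 with worst-case arc distance π/4, matching the geometric intuition.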
Pub Date: 2022-02-21 | DOI: 10.1080/01966324.2022.2037030
Ankita Gupta, Rakesh Ranjan, S. Upadhyay
Abstract The paper considers constant-stress accelerated life test situations under a competing risk scenario. Different groups of experimental units are operated at different accelerated levels of stress and, at each level, the units may fail from either of two competing causes of failure. For modeling the failure times resulting from such a test, the paper considers two competing risk models. The first is based on the minimum of two Weibull failure times, whereas the second is based on the minimum of two lognormal failure times. To study the effect of covariates on failure times, the scale parameter of the component models in each framework is regressed using the Arrhenius relationship. The paper performs a complete Bayes analysis of both models for a real dataset arising from a temperature-accelerated life test experiment and compares the two models using a few standard Bayesian tools. The Bayes analysis uses vague but proper priors for the parameters. The considered models result in intractable posterior distributions, so the paper uses the Metropolis algorithm to draw the desired posterior-based inferences. For censored data, intermediate Gibbs steps are used as an updating mechanism by defining full conditionals corresponding to the unknown censored observations. The plausibility of both models for the dataset is also checked before the comparison. A numerical example based on a real dataset is provided for illustration.
Published as: "A Bayes Analysis and Comparison of Arrhenius Weibull and Arrhenius Lognormal Models under Competing Risk," American Journal of Mathematical and Management Sciences, 42(1), 105–125.
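The Metropolis step described above can be sketched generically. The following is not the authors' Arrhenius-Weibull posterior; it is a minimal random-walk Metropolis sampler, assuming exponential lifetimes and a vague but proper Gamma(0.01, 0.01) prior on the rate, to illustrate sampling from a posterior known only up to a constant.

```python
import math
import random

def log_posterior(lam, data, a=0.01, b=0.01):
    """Log posterior (up to a constant) for an exponential rate lam
    with a vague but proper Gamma(a, b) prior."""
    if lam <= 0:
        return float("-inf")
    n, s = len(data), sum(data)
    return (n + a - 1) * math.log(lam) - lam * (s + b)

def metropolis(data, n_iter=20000, step=0.3, seed=42):
    """Random-walk Metropolis on the rate parameter."""
    rng = random.Random(seed)
    lam = 1.0
    draws = []
    for _ in range(n_iter):
        prop = lam + rng.gauss(0.0, step)
        log_u = math.log(rng.random() + 1e-300)  # jitter avoids log(0)
        if log_u < log_posterior(prop, data) - log_posterior(lam, data):
            lam = prop
        draws.append(lam)
    return draws[n_iter // 2:]  # discard the first half as burn-in
```

For this conjugate toy model the posterior mean is available in closed form, (n + a)/(Σx + b), which gives a check that the chain is mixing sensibly.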
Pub Date: 2022-02-10 | DOI: 10.1080/01966324.2022.2032500
Mustafa Nadar, Elif Erçelik
Abstract This paper deals with a system consisting of k identical strength components, where each side of a given component is composed of a pair of dependent elements. These elements follow a bivariate generalized exponential distribution, and each element is subjected to a common random stress T, which has a generalized exponential distribution. The system is considered to be working only if at least s of the k strength random variables overcome the random stress; the multicomponent reliability of the system, R_{s,k}, is the probability of this event. Estimating the multicomponent reliability may aid safety management and help prevent catastrophic failures. We estimate the multicomponent reliability using classical and Bayesian approaches. Since an explicit form of the stress-strength reliability estimate is not accessible, Lindley's approximation and Markov chain Monte Carlo (MCMC) methods are used to develop Bayes estimates of R_{s,k}. Further, numerical studies are conducted and the reliability estimators are compared through their estimated risks (ER).
Published as: "Reliability of Multicomponent Stress-Strength Model Based on Bivariate Generalized Exponential Distribution," American Journal of Mathematical and Management Sciences, 42(1), 86–103.
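The reliability R_{s,k} lends itself to simple Monte Carlo evaluation. The sketch below replaces the paper's bivariate generalized exponential setup with independent unit exponential strengths and stress (an assumption made purely for illustration); in this toy case the exact value works out to (k − s + 1)/(k + 1), which gives a check on the simulation.

```python
import random

def multicomponent_reliability(s, k, n_sim=200000, seed=7):
    """Monte Carlo estimate of P(at least s of k strengths exceed a common stress T).
    Strengths and stress are i.i.d. unit exponentials -- a stand-in model,
    not the paper's bivariate generalized exponential setup."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sim):
        t = rng.expovariate(1.0)  # common random stress
        exceed = sum(1 for _ in range(k) if rng.expovariate(1.0) > t)
        if exceed >= s:
            hits += 1
    return hits / n_sim
```

For s = 2, k = 3 the exact toy-model value is (3 − 2 + 1)/(3 + 1) = 0.5, so the estimate should land close to one half.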
Pub Date: 2022-02-07 | DOI: 10.1080/01966324.2021.2019148
S. Kayal, L. K. Patra, Raju Bhakta, S. Nadarajah
SYNOPTIC ABSTRACT The largest and the smallest order statistics represent the lifetimes of a parallel and a series system, respectively. Various stochastic orders, such as the usual stochastic order, the hazard rate order, and the reversed hazard rate order, are used to identify a stochastically better system. In the present communication, we consider stochastic comparison of the largest and the smallest order statistics arising from heterogeneous log-logistic distributions. First, we treat the case when the components do not receive random shocks; in the other case, we assume that the components receive random shocks. The comparisons are studied in terms of the dispersive, usual stochastic, hazard rate, and reversed hazard rate orders. Majorization-based sufficient conditions are obtained to compare the order statistics. Several numerical examples are presented to illustrate the results.
Published as: "Ordering Results for Order Statistics from Heterogeneous Log-Logistic Distributions," American Journal of Mathematical and Management Sciences, 42(1), 51–68.
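Why sufficient conditions are needed can be seen numerically: without them, the lifetime distributions of two parallel systems can cross. The sketch below uses assumed scale vectors (1, 3) and (2, 2) with a common shape 2 (illustrative choices, not an instance of the paper's majorization conditions) and evaluates P(max > x) for the largest order statistic.

```python
def loglogistic_cdf(x, scale, shape):
    """CDF of the log-logistic distribution: (x/a)^b / (1 + (x/a)^b)."""
    r = (x / scale) ** shape
    return r / (1.0 + r)

def parallel_survival(x, scales, shape):
    """P(max of independent components > x) = 1 - product of component CDFs."""
    prod = 1.0
    for a in scales:
        prod *= loglogistic_cdf(x, a, shape)
    return 1.0 - prod
```

At x = 1 the homogeneous system has the larger survival probability (0.96 vs. 0.95), while at x = 3 the heterogeneous one is ahead (0.55 vs. about 0.521), so neither dominates in the usual stochastic order for these scales.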
Pub Date: 2022-01-27 | DOI: 10.1080/01966324.2022.2027302
Swati Agarwal, Shaurav Sharma
Abstract In emergency situations, such as outbreaks of infectious diseases (COVID-19, SARS, Ebola, MERS, etc.), natural disasters (earthquakes, tsunamis, cyclones, etc.), wars, and terrorist attacks, where distributing essential goods and services in the minimum possible time is a major logistical challenge, the concept of the more-for-less paradox can be helpful. In a minimization-type transportation problem, this paradoxical situation occurs when the value of the objective function falls below the optimum by shipping a larger total quantity of goods. In this article, a unified algorithm is developed to identify and resolve such paradoxical situations in the time minimization transportation problem with mixed constraints, using a right-hand-side parametric formulation. With this approach, the paradoxical solution (if it exists) is found first, followed by an optimal solution; if no paradoxical part exists, it is simply omitted. The conditions governing the existence of more transportation flow in less shipping time enable the decision-maker to extend the optimal solution in search of a more-for-less opportunity at a time of emergency.
The validity of the algorithm has been tested through numerical illustrations and computational observations in MATLAB.
Published as: "More-for-Less Paradox in Time Minimization Transportation Problem with Mixed Constraints," American Journal of Mathematical and Management Sciences, 42(1), 69–85.
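The paradox itself is easy to exhibit on a toy instance; the code below is a brute-force illustration, not the paper's parametric algorithm. Assume a 2×2 problem in which every row must ship at least, and every column must receive at least, a stated quantity. With time matrix [[1, 5], [5, 1]], row minima (1, 3), and column minima (3, 1), every 4-unit plan is forced onto a time-5 route, yet shipping 6 units achieves bottleneck time 1: more flow, less time.

```python
from itertools import product

def bottleneck_time(x, t):
    """Shipping time of a plan = maximum time over routes actually used."""
    times = [t[i][j] for i in range(2) for j in range(2) if x[i][j] > 0]
    return max(times) if times else 0

def min_time_for_total(total, t, row_min, col_min, cap=6):
    """Best bottleneck time over integer 2x2 plans shipping exactly `total` units,
    subject to 'ship at least' row minima and 'receive at least' column minima."""
    best = None
    for x11, x12, x21, x22 in product(range(cap + 1), repeat=4):
        if x11 + x12 + x21 + x22 != total:
            continue
        if x11 + x12 < row_min[0] or x21 + x22 < row_min[1]:
            continue
        if x11 + x21 < col_min[0] or x12 + x22 < col_min[1]:
            continue
        bt = bottleneck_time([[x11, x12], [x21, x22]], t)
        if best is None or bt < best:
            best = bt
    return best
```

On this instance the minimum feasible total is 4 with bottleneck time 5, while total 6 (e.g., x11 = 3, x22 = 3) attains bottleneck time 1.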
Pub Date: 2021-12-23 | DOI: 10.1080/01966324.2021.2007189
Jatesh Kumar, Vikram Singh Brahm, A. N. Gill
Abstract This article deals with the comparison of successive two-parameter exponential populations with respect to the location parameter when it is reasonable to assume that the populations are ordered in some natural way. It is well reported in the literature that if the experimenter's interest is only in testing the significance of differences, stepwise procedures are more powerful than simultaneous confidence interval procedures. This fact motivated us to extend the simultaneous confidence interval procedure of Singh et al. (2006), for the differences between the location parameters of successive exponential populations, to a stepwise procedure by proposing step-down tests for simultaneously testing the significance of these differences. For a given type-I family-wise error rate (FWER), the critical constants are tabulated so that practitioners can implement the proposed procedure. The advantage of the proposed procedure over that of Singh et al.
(2006) is demonstrated using a numerical example.
Published as: "Step-Down Procedure for Comparison between Successive Exponential Populations," American Journal of Mathematical and Management Sciences, 41(1), 362–372.
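The step-down logic, testing the most significant difference first and stopping at the first acceptance, is the same as in familiar p-value procedures. As a hedged stand-in for the exponential-specific tests and tabulated critical constants, here is Holm's step-down procedure, which illustrates the mechanics.

```python
def holm_step_down(p_values, alpha=0.05):
    """Holm's step-down procedure: visit p-values from smallest to largest,
    rejecting while p_(i) <= alpha / (m - i); stop at the first acceptance.
    Controls the family-wise error rate at level alpha."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    rejected = set()
    for rank, idx in enumerate(order):
        if p_values[idx] <= alpha / (m - rank):
            rejected.add(idx)
        else:
            break  # step-down stops at the first non-rejection
    return rejected
```

For p-values (0.001, 0.01, 0.03, 0.6) at FWER 0.05, the first two hypotheses are rejected and the procedure stops at the third, since 0.03 > 0.05/2.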
Pub Date: 2021-12-23 | DOI: 10.1080/01966324.2021.2016522
N. Unnikrishnan Nair, B. Vineshkumar
Abstract Recently, Vineshkumar and Nair (2019) discussed applications of bivariate quantile functions in the context of reliability analysis, but the general properties of bivariate quantile functions were not considered in their work. In the present paper, we carry out a preliminary study of the properties of bivariate distributions represented through quantile functions. The use of the new results is illustrated for a bivariate quantile function model by deriving its properties and then applying them to real data.
Published as: "Properties of Bivariate Distributions Represented through Quantile Functions," American Journal of Mathematical and Management Sciences, 42(1), 1–12.
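A univariate sketch of the quantile-function viewpoint may help fix ideas: a model specified through Q(u) determines F by inversion, so distributional quantities can be computed directly from Q. The exponential quantile function used below is an illustrative assumption, not one of the paper's bivariate models.

```python
import math

def Q_exponential(u, rate=1.0):
    """Quantile function of the exponential distribution: Q(u) = -ln(1-u)/rate."""
    return -math.log(1.0 - u) / rate

def cdf_from_quantile(x, Q, tol=1e-12):
    """Recover F(x) by bisection on the (monotone increasing) quantile function."""
    a, b = 0.0, 1.0 - 1e-15  # stay inside the open unit interval
    while b - a > tol:
        mid = (a + b) / 2
        if Q(mid) < x:
            a = mid
        else:
            b = mid
    return (a + b) / 2
```

For the unit-rate exponential, Q(0.5) = ln 2 and inverting recovers F(ln 2) = 0.5, confirming Q and F are inverses.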
Pub Date: 2021-11-15 | DOI: 10.1080/01966324.2021.1997676
Abedel-Qader Al-Masri
Abstract Combining p-values from independent statistical tests is a popular approach to meta-analysis, particularly when the data underlying the tests are either no longer available or are difficult to combine. For simple null hypotheses, given any non-parametric combination method with a monotone increasing acceptance region, there exists a problem for which this method is most powerful against some alternative. Starting from this perspective and recasting each method of combining p-values as a likelihood ratio test, we present theoretical results for some of the standard combiners which provide guidance about how a powerful combiner might be chosen in practice. In this paper we consider the problem of combining independent tests of a simple hypothesis in the case of the log-normal distribution. We study six distribution-free combination procedures: Fisher, logistic, sum of p-values, inverse normal, Tippett's method, and maximum of p-values. We study the behavior of these tests via the exact Bahadur slope, and discuss the limits of the ratios of every pair of these slopes as the parameter approaches its boundary values. In one limiting case, the maximum of p-values is better than all other methods, followed in decreasing order by the inverse normal, logistic, sum of p-values, Fisher, and Tippett procedures; in the other, the sum of p-values is the worst method, while the remaining methods are equivalent in the sense of sharing the same limit. Finally, a numerical study investigates these comparisons over a range of parameter values; it shows that the inverse normal method is the best, followed by the logistic method, the Fisher method, and the sum of p-values method.
Published as: "On Combining Independent Tests in Case of Log-Normal Distribution," American Journal of Mathematical and Management Sciences, 41(1), 350–361.
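The six combiners are straightforward to compute. The sketch below evaluates each statistic and, where the null distribution is standard, the combined p-value: Fisher via the chi-square survival function with even degrees of freedom, Tippett and the maximum via order-statistic formulas, the sum via the Irwin-Hall distribution, and the inverse normal via the standard normal CDF; for the logistic combiner only the statistic is returned. Nothing here is specific to the paper's log-normal alternative or its Bahadur-slope analysis.

```python
import math

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def phi_inv(u, tol=1e-12):
    """Standard normal quantile by bisection (u strictly between 0 and 1)."""
    lo, hi = -10.0, 10.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if phi(mid) < u:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def combine(pvals):
    """Combined p-values (or statistic) for six classical combiners."""
    n = len(pvals)
    out = {}
    # Fisher: -2 sum log p ~ chi-square with 2n df; closed-form survival for even df
    x = -2.0 * sum(math.log(p) for p in pvals)
    out["fisher"] = math.exp(-x / 2) * sum((x / 2) ** k / math.factorial(k)
                                           for k in range(n))
    # Tippett: reject for small min p; combined p = 1 - (1 - p_min)^n
    out["tippett"] = 1.0 - (1.0 - min(pvals)) ** n
    # Maximum of p-values: combined p = p_max^n
    out["max"] = max(pvals) ** n
    # Sum of p-values: Irwin-Hall CDF at s = sum of p-values
    s = sum(pvals)
    out["sum"] = sum((-1) ** k * math.comb(n, k) * (s - k) ** n
                     for k in range(int(s) + 1)) / math.factorial(n)
    # Inverse normal (Stouffer): Phi(sum Phi^{-1}(p_i) / sqrt(n))
    out["inv_normal"] = phi(sum(phi_inv(p) for p in pvals) / math.sqrt(n))
    # Logistic: statistic only; its null distribution is less standard
    out["logistic_stat"] = sum(math.log(p / (1.0 - p)) for p in pvals)
    return out
```

For two p-values of 0.05 each, Fisher gives about 0.0175, Tippett 0.0975, the maximum 0.0025, the sum 0.005, and the inverse normal about 0.01, illustrating how differently the combiners weight the same evidence.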
Pub Date: 2021-08-28 | DOI: 10.1080/01966324.2021.1963357
S. Dey, Liang Wang
Abstract In this article, various classical estimation methods are employed to estimate the parameters of the unit Burr III distribution. Further, second-order bias corrections of the maximum likelihood estimates (MLEs) of its parameters are obtained using a modified bias-correction approach. In addition, a parametric bootstrap bias correction method is also considered for the model parameters. Extensive Monte Carlo simulation studies are performed to evaluate the different estimation methods in terms of average bias and mean squared error, and the performance of these estimators is compared. Our results reveal that the bias corrections improve the accuracy of the maximum likelihood estimates. Finally, a real data example is discussed to illustrate the applicability of the unit Burr III distribution.
Published as: "Methods of Estimation and Bias Corrected Maximum Likelihood Estimators of Unit Burr III Distribution," American Journal of Mathematical and Management Sciences, 41(1), 316–333.
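The parametric bootstrap bias correction has a simple generic form: simulate from the fitted model, refit, and subtract the estimated bias, giving the corrected estimate 2θ̂ − mean(θ̂*). Fitting the unit Burr III requires numerical optimization, so the sketch below illustrates the recipe with the exponential rate, whose MLE n/Σxᵢ is biased upward in small samples; the exponential model is an illustrative assumption, not the paper's distribution.

```python
import random

def mle_rate(data):
    """MLE of the exponential rate: n / sum(x); biased upward for small n."""
    return len(data) / sum(data)

def bootstrap_bias_corrected(data, n_boot=2000, seed=1):
    """Parametric bootstrap bias correction of the MLE:
    corrected = theta_hat - (boot_mean - theta_hat) = 2*theta_hat - boot_mean."""
    rng = random.Random(seed)
    theta_hat = mle_rate(data)
    boot = []
    for _ in range(n_boot):
        # simulate a dataset of the same size from the fitted model, then refit
        sample = [rng.expovariate(theta_hat) for _ in range(len(data))]
        boot.append(mle_rate(sample))
    boot_mean = sum(boot) / n_boot
    return 2.0 * theta_hat - boot_mean
```

Since the exponential-rate MLE satisfies E[θ̂] = nθ/(n − 1) for n > 1, the bootstrap mean sits above the original estimate and the corrected value below it, as the correction intends.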
Pub Date: 2021-08-27 | DOI: 10.1080/01966324.2021.1957729
Cenk Çalışkan
Abstract We study the Economic Order Quantity (EOQ) model for deteriorating items with planned backorders. In the exponentially deteriorating items model, the inventory deterioration rate is proportional to the inventory level, which leads to an exponentially decreasing inventory level over time, obtained by solving an ordinary differential equation. Due to polynomial and exponential terms in the total cost function, an exact closed-form solution is not possible, so an approximation of the total cost function has to be used. In this paper, we propose a concise and intuitive method to determine the inventory level function without using differential equations, and a method to determine the optimal solution without derivatives, based on an accurate approximation of the total cost function. Our approximation is novel and intuitive, and numerical experiments demonstrate the accuracy of the closed-form solution based on our approximation.
Published as: "EOQ Model for Exponentially Deteriorating Items with Planned Backorders without Differential Calculus," American Journal of Mathematical and Management Sciences, 41(1), 223–243.
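Two of the ingredients can be sketched generically; this is not the authors' cost approximation. The first is the exponentially deteriorating inventory level, I(t) = (d/θ)(e^{θ(T−t)} − 1) for demand rate d, deterioration rate θ, and cycle length T, which satisfies I(T) = 0. The second is a derivative-free golden-section search for the optimal cycle length; as a sanity check it recovers the classical EOQ optimum T* = sqrt(2K/(hd)) for the no-deterioration cost K/T + hdT/2 (K, h, d below are illustrative values).

```python
import math

def inventory_level(t, T, d, theta):
    """On-hand stock at time t in a cycle of length T with demand rate d and
    deterioration rate theta: I(t) = (d/theta) * (exp(theta*(T-t)) - 1)."""
    return (d / theta) * (math.exp(theta * (T - t)) - 1.0)

def golden_section_min(f, a, b, tol=1e-8):
    """Derivative-free minimization of a unimodal function on [a, b]."""
    g = (math.sqrt(5.0) - 1.0) / 2.0
    x1, x2 = b - g * (b - a), a + g * (b - a)
    f1, f2 = f(x1), f(x2)
    while b - a > tol:
        if f1 < f2:
            b, x2, f2 = x2, x1, f1
            x1 = b - g * (b - a)
            f1 = f(x1)
        else:
            a, x1, f1 = x1, x2, f2
            x2 = a + g * (b - a)
            f2 = f(x2)
    return (a + b) / 2
```

With K = 100, d = 50, h = 2 the cost per unit time is 100/T + 50T, whose minimizer is T* = sqrt(2), and the search finds it without any derivative information.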