"Decentralized stockage policies in a multiechelon environment" — S. Frazza and A. Kaplan. Naval Research Logistics Quarterly, May 1986. doi:10.1002/NAV.3800330203

Characteristics of supply performance at the top echelon of an optimally managed multiechelon supply system are investigated; insights are developed that are useful in devising coordinated single-echelon policies able to approximate the benefits of multiechelon management.
"Optimal selection with alternative information" — G. Monahan. Naval Research Logistics Quarterly, May 1986. doi:10.1002/NAV.3800330211

This article examines the problem of optimally selecting from several unknown rewards when alternative, costly sources of information are available. The optimal rule, indicating the information to be purchased and the reward to be selected, is specified as a function of the decision maker's prior probabilities regarding the value of each alternative. The rule is surprisingly complex, balancing prior beliefs, the "informativeness" of the relevant information system, and the cost of acquiring information.
"An entropy model for marketing structure analysis and price decision of new brand" — I. Arizono and H. Ohta. Naval Research Logistics Quarterly, May 1986. doi:10.1002/NAV.3800330208

The concept of maximum entropy has been applied to specify a probabilistic model of consumer purchase behavior. This article is concerned with marketing structure analysis based on an entropy model when a new brand has been pushed into an existing two-brand market. A comparison between the proposed model and the initial three-brand model is made on the basis of their marketing structures. An optimal price decision that maximizes sales is also discussed.
"Heterogeneous discrete expenditure for diminishing returns" — H. Hamburger and J. Slagle. Naval Research Logistics Quarterly, May 1986. doi:10.1002/NAV.3800330204

A probabilistic model is developed that applies to military bombardment, advertising to a mass audience, and other situations in which striking a target means that less of it is left to strike. The model provides the basis for decision analysis based on marginal gain in such circumstances. Heterogeneous resources are considered, as well as composite targets. All expenditures are quantized. The model has been developed as part of a computer-based military expert system, to replace a large, complex set of expert opinions. In that application it sharply improves efficiency, yet conforms to major tenets of tactical doctrine.
"Comparative evaluation of prior versus progressive articulation of preference in bicriterion optimization" — G. Klein, H. Moskowitz, and A. Ravindran. Naval Research Logistics Quarterly, May 1986. doi:10.1002/NAV.3800330212

Procedures for solving multiple criteria problems are receiving increasing attention. Two major solution approaches involve prior articulation and progressive articulation of preference information. A progressive articulation (interactive) optimization approach, called the Paired Comparison Method (PCM), is compared with the prior articulation approach of a priori utility function measurement in a quality control decision environment, from the perspective of the decision maker. The three major issues investigated were (1) the ease of use of each method, (2) the preferences of solutions obtained, and (3) the insight provided by the methodology into the nature and structure of the problem. The problem setting involved management students who were required to determine an acceptance sampling plan using both methods. The PCM provided the most preferred solutions and was considered easier to use and understand. The prior articulation method was found to give more insight into the problem structure. The results suggest that a hybrid approach, combining prior preference assessment with an interactive procedure to exploit the advantages of each, should be employed to solve multiple criteria problems.
"Using simulated annealing to solve routing and location problems" — B. Golden and C. C. Skiscim. Naval Research Logistics Quarterly, May 1986. doi:10.1002/NAV.3800330209

In recent papers by Kirkpatrick et al., an analogy between the statistical mechanics of large multivariate physical systems and combinatorial optimization has been presented and used to develop a general strategy for solving discrete optimization problems. The method relies on probabilistically accepting intermediate increases in the objective function through a set of user-controlled parameters. It is argued that by taking such controlled uphill steps from time to time, a high-quality solution can eventually be found in a moderate amount of computer time. In this paper, we implement this idea, apply it to the traveling salesman problem and the p-median location problem, and test the approach extensively.
"Use of discounted force arrivals in static force comparisons" — R. Lorentzen. Naval Research Logistics Quarterly, May 1986. doi:10.1002/NAV.3800330205

In traditional static comparisons of two opposing forces, the weapon systems on each side are added together after being weighted according to a weapon scoring system. Such scoring does not reflect the availability times of the weapon systems in a perceived conflict. This paper suggests how availability times can be incorporated by introducing the net present value of force arrival patterns. The concept is extended to the case where uncertainty about warning time is reflected through a probability distribution for the time of outbreak of hostilities.
"The analysis of some algorithms for generating random variates with a given hazard rate" — L. Devroye. Naval Research Logistics Quarterly, May 1986. doi:10.1002/NAV.3800330210

We analyze the expected time performance of two versions of the thinning algorithm of Lewis and Shedler for generating random variates with a given hazard rate on [0,∞). For thinning with fixed dominating hazard rate g(x) = c, for example, it is shown that the expected number of iterations is cE(X), where X is the random variate that is produced. For DHR distributions, we can use dynamic thinning by adjusting the dominating hazard rate as we proceed. With the aid of some inequalities, we show that this improves the performance dramatically. For example, the expected number of iterations is bounded by a constant plus E(log⁺(h(0)X)) (the logarithmic moment of X).
"Successive approximation in separable programming: an improved procedure for convex separable programs" — L. Thakur. Naval Research Logistics Quarterly, May 1986. doi:10.1002/NAV.3800330213

We implement a solution procedure for general convex separable programs in which a series of relatively small piecewise-linear programs is solved, as opposed to a single large one, and in which, based on bound calculations developed in [13] and [14], the ranges of linearization are systematically reduced for successive programs. The procedure inherits ε-convergence to the global optimum in a finite number of steps, but perhaps its most distinctive feature is the rigorous way in which ranges containing an optimal solution are reduced from iteration to iteration. This paper describes the procedure, called successive approximation, and discusses its convergence, the tightness of its bounds, its bound-calculation overhead, and its robustness. It presents a computer implementation to demonstrate its effectiveness for general problems and compares it (1) with the more standard separable programming approach and (2) with one of the recent augmented Lagrangian methods [10] included in a comprehensive study of nonlinear programming codes [12]. It seems clear from over 130 cases resulting from 80 distinct problems studied here that significant savings in computational effort can be realized by a judicious use of the procedure, and the ease with which it can be used is appreciably increased by the robustness it shows. Moreover, for most of these problems, the advantage increases as the size, the nonlinearity, and the degree of desired accuracy increase. Other important benefits include significantly smaller storage requirements, the ability to estimate the error in the current solution, and the ability to terminate the algorithm as soon as the acceptable level of accuracy has been achieved. Problems requiring up to about 10,000 nonzero elements in their specification, and about 45,000 nonzero elements in the generated separable programs resulting from up to 70 original nonlinear variables and 70 nonlinear constraints, are included in the computations.
"Optimal stocking policies for low usage items in multi-echelon inventory systems" — Morris A. Cohen, P. Kleindorfer, and Hau L. Lee. Naval Research Logistics Quarterly, February 1986. doi:10.1002/NAV.3800330103

Multi-echelon logistic systems are essential parts of the service support function of high-technology firms. The combination of technological developments and competitive pressures has led to the development of service systems with a unique set of characteristics. These characteristics include (1) low demand probabilities; (2) high-cost items; (3) complex echelon structures; (4) the existence of pooling mechanisms among stocking locations at the same echelon level; (5) high priority for service, often expressed in terms of response-time service levels for product groups of items; (6) scrapping of failed parts; and (7) recycling of issued stock due to diagnostic use. This article develops a comprehensive model of a stochastic, multi-echelon inventory system that takes account of these characteristics. Solutions to the constrained optimization problem are found using a branch-and-bound procedure. The results of applying this procedure to a spare parts inventory system for a computer manufacturer have led to a number of important policy conclusions.