In a military surface-based air defence environment, a fire control officer typically employs a computerised weapon assignment decision support subsystem to aid them in assigning available surface-based weapon systems to engage aerial threats in an attempt to protect defended surface assets, a problem known in the military operations research literature as the weapon assignment problem. In this paper, a tri-objective, dynamic weapon assignment model is proposed by modelling the weapon assignment problem as a multi-objective variation of the celebrated vehicle routing problem with time windows. A multi-objective evolutionary metaheuristic for solving the vehicle routing problem with time windows is used to solve the model. The workability of this modelling approach is illustrated by solving the model in the context of a simulated surface-based air defence scenario.
{"title":"A tri-objective, dynamic weapon assignment model for surface-based air defence","authors":"DP Lötter, Jh van Vuuren","doi":"10.5784/32-1-522","DOIUrl":"https://doi.org/10.5784/32-1-522","url":null,"abstract":"In a military surface-based air defence environment, a fire control officer typically employs a computerised weapon assignment decision support subsystem to aid him in the assignment of available surface-based weapon systems to engage aerial threats in an attempt to protect defended surface assets - a problem known in the military operations research literature as the weapon assignment problem. In this paper, a tri-objective, dynamic weapon assignment model is proposed by modelling the weapon assignment problem as a multi-objective variation of the celebrated vehicle routing problem with time windows. A multi-objective, evolutionary metaheuristic for solving the vehicle routing problem with time windows is used to solve the model. The workability of this modelling approach is illustrated by solving the model in the context of a simulated, surface-based air defence scenario.","PeriodicalId":30587,"journal":{"name":"ORiON","volume":"25 1","pages":"1-22"},"PeriodicalIF":0.0,"publicationDate":"2016-06-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82843095","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
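The VRPTW analogy in the abstract above can be made concrete with a small scheduling sketch: a weapon system plays the role of a vehicle and each aerial threat is a customer whose "service" (engagement) must start inside a time window. The `Threat` fields, the earliest-window ordering, and the assumption of zero re-aim time between engagements are illustrative simplifications, not the paper's model.

```python
from dataclasses import dataclass

@dataclass
class Threat:
    earliest: float   # time the engagement window opens
    latest: float     # latest time an engagement may still start
    duration: float   # time the weapon system is tied up by the engagement

def feasible_schedule(threats):
    """Check whether a single weapon system ("vehicle") can engage a set of
    threats ("customers") back to back while respecting each threat's time
    window -- the VRPTW analogy of the abstract. Hypothetical simplification:
    threats are taken in order of window opening, with no re-aim time."""
    t = 0.0
    for th in sorted(threats, key=lambda th: th.earliest):
        start = max(t, th.earliest)    # wait for the window if we arrive early
        if start > th.latest:          # window already closed: infeasible
            return False
        t = start + th.duration
    return True
```

A real model would add assignment of threats across multiple weapon systems and the three objectives; this only shows the time-window mechanics.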
The method introduced in this paper extends the trim-loss problem, also known as the 2D rectangular SLOPP, to the multiple-sheet situation in which N same-size two-dimensional sheets have to be cut optimally, producing demand items that partially or totally satisfy the requirements of a given order. The cutting methodology is constrained to be of the guillotine type, and rotation of pieces is allowed. Sets of patterns are generated sequentially. For each set found, an integer program is solved to produce a feasible, or sometimes optimal, solution to the N-sheet problem if possible. If a feasible solution cannot be identified, the waste acceptance tolerance is relaxed somewhat until solutions are obtained. Each set of N cutting patterns, one for each of the N sheets, is then analysed for optimality using criteria developed here. This process continues until an optimal solution is identified. Finally, it is indicated how a given order of demand items can be totally satisfied in an optimal way by identifying the smallest N and associated cutting patterns that minimise wastage. Empirical results are reported on a set of 120 problem instances based on well-known problems from the literature. The results for this data set suggest the feasibility of this approach for optimising the cutting stock problem over more than one same-size stock sheet. The main contribution of this research is a detailed extension of the Wang methodology to obtain and prove exact solutions for the multiple same-size stock sheet case.
{"title":"An exact algorithm for the N-sheet two dimensional single stock-size cutting stock problem","authors":"T. Steyn, J. Hattingh","doi":"10.5784/31-2-527","DOIUrl":"https://doi.org/10.5784/31-2-527","url":null,"abstract":"The method introduced in this paper extends the trim-loss problem or also known as 2D rectangular SLOPP to the multiple sheet situation where N same size two-dimensional sheets have to be cut optimally producing demand items that partially or totally satisfy the requirements of a given order. The cutting methodology is constrained to be of the guillotine type and rotation of pieces is allowed. Sets of patterns are generated in a sequential way. For each set found, an integer program is solved to produce a feasible or sometimes optimal solution to the N-sheet problem if possible. If a feasible solution cannot be identified, the waste acceptance tolerance is relaxed somewhat until solutions are obtained. Sets of cutting patterns consisting of N cutting patterns, one for each of the N sheets, is then analysed for optimality using criteria developed here. This process continues until an optimal solution is identified. Finally, it is indicated how a given order of demand items can be totally satisfied in an optimal way by identifying the smallest N and associated cutting patterns to minimize wastage. Empirical results are reported on a set of 120 problem instances based on well known problems from the literature. The results reported for this data set of problems suggest the feasibility of this approach to optimize the cutting stock problem over more than one same size stock sheet. The main contribution of this research shows the details of an extension of the Wang methodology to obtain and prove exact solutions for the multiple same size stock sheet case.","PeriodicalId":30587,"journal":{"name":"ORiON","volume":"282 1","pages":"77-94"},"PeriodicalIF":0.0,"publicationDate":"2015-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76808699","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
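The pattern-selection step described above (solve an integer program over a candidate set of N cutting patterns) can be sketched with brute-force enumeration standing in for the IP. Each pattern is abstracted as a pair of item yields and a waste value; the `best_pattern_set` function and this representation are hypothetical illustrations, and real instances need an integer programming solver rather than enumeration.

```python
from itertools import combinations_with_replacement

def best_pattern_set(patterns, demand, n_sheets):
    """Choose n_sheets cutting patterns (with repetition allowed) whose
    combined item yield covers `demand`, minimising total waste.
    Each pattern is (yields_per_item_type, waste). Exhaustive search is a
    toy stand-in for the integer program mentioned in the abstract."""
    best = None
    for combo in combinations_with_replacement(patterns, n_sheets):
        produced = [sum(p[0][i] for p in combo) for i in range(len(demand))]
        if all(produced[i] >= demand[i] for i in range(len(demand))):
            waste = sum(p[1] for p in combo)
            if best is None or waste < best[1]:
                best = (combo, waste)
    return best    # (chosen patterns, total waste), or None if infeasible
```

For example, with two item types, a pattern yielding one of each at low waste can beat combining two single-type patterns.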
Similar to the constrained facility location problem, the passive optical network (PON) planning problem necessitates the search for a subset of deployed facilities (splitters) and their allocated demand points (optical network units) to minimise the overall deployment cost. A mixed integer linear programming formulation stemming from network flow optimisation is used to construct a heuristic based on limiting the total number of interconnecting paths when implementing fibre duct sharing. A disintegration heuristic is proposed based on the output of centroid-based, density-based and hybrid clustering algorithms to reduce the time complexity while ensuring close-to-optimal results. The proposed heuristics are then evaluated using a large real-world dataset, showing favourable performance.
{"title":"Heuristic approach to the passive optical network with fibre duct sharing planning problem","authors":"van Loggerenberg, MJ Grobler, S. Terblanche","doi":"10.5784/31-2-532","DOIUrl":"https://doi.org/10.5784/31-2-532","url":null,"abstract":"Similar to the constrained facility location problem, the passive optical network (PON) planning problem necessitates the search for a subset of deployed facilities (splitters) and their allocated demand points (optical network units) to minimise the overall deployment cost. A mixed integer linear programming formulation stemming from network flow optimisation is used to construct a heuristic based on limiting the total number of interconnecting paths when implementing fibre duct sharing. A disintegration heuristic is proposed based on the output of a centroid, density-based and a hybrid clustering algorithm to reduce the time complexity while ensuring close to optimal results. The proposed heuristics are then evaluated using a large real-world dataset, showing favourable performance.","PeriodicalId":30587,"journal":{"name":"ORiON","volume":"102 1","pages":"95-110"},"PeriodicalIF":0.0,"publicationDate":"2015-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87447071","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
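The disintegration idea above — cluster the optical network units so each cluster can be planned as a smaller subproblem — can be illustrated with plain k-means standing in for the centroid, density-based and hybrid algorithms of the paper. The deterministic first-k seeding and the `kmeans_split` interface are assumptions made for this sketch.

```python
import numpy as np

def kmeans_split(points, k, iters=20):
    """Partition demand points (optical network units) into k clusters,
    each to be planned (splitter placement, duct routing) independently.
    Deterministic seeding: the first k points serve as initial centroids."""
    pts = np.asarray(points, dtype=float)
    centroids = pts[:k].copy()
    labels = np.zeros(len(pts), dtype=int)
    for _ in range(iters):
        # distance of every point to every centroid, then nearest assignment
        d = np.linalg.norm(pts[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):                 # skip empty clusters
                centroids[j] = pts[labels == j].mean(axis=0)
    return labels, centroids
```

Each cluster's mixed integer subproblem is then far smaller than the monolithic formulation, which is the source of the time-complexity reduction claimed in the abstract.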
{"title":"Editorial to Volume 31(2)","authors":"S. E. Visagie","doi":"10.5784/31-2-551","DOIUrl":"https://doi.org/10.5784/31-2-551","url":null,"abstract":"","PeriodicalId":30587,"journal":{"name":"ORiON","volume":"11 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2015-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89761445","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A real life order picking system consisting of a set of unidirectional picking lines is investigated. Batches of stock keeping units (SKUs) are processed in waves defined as a set of SKUs and their corresponding store requirements. Each wave is processed independently on one of the parallel picking lines as pickers walk in a clockwise direction picking stock. Once all the orders for a wave are completed a new mutually exclusive set of SKUs is brought to the picking line for a new wave. SKUs which differ only in size classification, for example small, medium and large shirts, are grouped together into distributions (DBNs) and must be picked in the same wave. The assignment of DBNs to available picking lines for a single day of picking is considered in this paper. Different assignments of DBNs to picking lines are evaluated using three measures, namely total walking distance, the number of resulting small cartons and work balance. Several approaches to assign DBNs to picking lines have been investigated in literature. All of these approaches seek to minimise walking distance only and include mathematical formulations and greedy heuristics. Four different correlation measures are introduced in this paper to reduce the number of small cartons produced and reduce walking distance simultaneously. These correlation measures are used in a greedy insertion algorithm. The correlation measures were compared to historical assignments as well as a greedy approach which is known to address walking distances effectively. Using correlation measures to assign DBNs to picking lines reduces the total walking distance of pickers by 20% compared to the historical assignments. This is similar to the greedy approach which only considers walking distance as an objective, however, using correlations reduced the number of small cartons produced by the greedy approach. Key words: SKU assignment, order picking, assignment problems, combinatorial optimisation.
{"title":"SKU assignment to unidirectional picking lines using correlations","authors":"J. Matthews, S. E. Visagie","doi":"10.5784/31-2-531","DOIUrl":"https://doi.org/10.5784/31-2-531","url":null,"abstract":"A real life order picking system consisting of a set of unidirectional picking lines is investigated. Batches of stock keeping units (SKUs) are processed in waves defined as a set of SKUs and their corresponding store requirements. Each wave is processed independently on one of the parallel picking lines as pickers walk in a clockwise direction picking stock. Once all the orders for a wave are completed a new mutually exclusive set of SKUs is brought to the picking line for a new wave. SKUs which differ only in size classification, for example small, medium and large shirts, are grouped together into distributions (DBNs) and must be picked in the same wave. The assignment of DBNs to available picking lines for a single day of picking is considered in this paper. Different assignments of DBNs to picking lines are evaluated using three measures, namely total walking distance, the number of resulting small cartons and work balance. Several approaches to assign DBNs to picking lines have been investigated in literature. All of these approaches seek to minimise walking distance only and include mathematical formulations and greedy heuristics. Four different correlation measures are introduced in this paper to reduce the number of small cartons produced and reduce walking distance simultaneously. These correlation measures are used in a greedy insertion algorithm. The correlation measures were compared to historical assignments as well as a greedy approach which is known to address walking distances effectively. Using correlation measures to assign DBNs to picking lines reduces the total walking distance of pickers by 20% compared to the historical assignments. This is similar to the greedy approach which only considers walking distance as an objective, however, using correlations reduced the number of small cartons produced by the greedy approach. Key words: SKU assignment, order picking, assignment problems, combinatorial optimisation.","PeriodicalId":30587,"journal":{"name":"ORiON","volume":"20 1","pages":"61-76"},"PeriodicalIF":0.0,"publicationDate":"2015-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82571559","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
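The greedy insertion algorithm mentioned above can be sketched as follows: each DBN is placed on the picking line whose already-assigned DBNs it correlates with most strongly, so frequently co-ordered DBNs land in the same wave. The `corr` lookup table, the capacity rule and the first-line tie-breaking are hypothetical details for illustration, not the paper's exact correlation measures.

```python
def greedy_insert(dbns, lines, corr, capacity):
    """Assign each DBN to the picking line with which it has the highest
    total correlation to the DBNs already assigned there, subject to a
    maximum number of DBNs per line. corr[a][b] is a pairwise score."""
    assignment = {ln: [] for ln in lines}
    for d in dbns:
        best_line, best_score = None, None
        for ln in lines:
            if len(assignment[ln]) >= capacity:
                continue                     # line is full for this wave
            score = sum(corr[d][other] for other in assignment[ln])
            if best_score is None or score > best_score:
                best_line, best_score = ln, score
        assignment[best_line].append(d)
    return assignment
```

With two strongly correlated pairs and two lines of capacity two, the sketch groups each pair onto its own line.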
{"title":"Editorial to Volume 31(1)","authors":"S. E. Visagie","doi":"10.5784/31-1-535","DOIUrl":"https://doi.org/10.5784/31-1-535","url":null,"abstract":"","PeriodicalId":30587,"journal":{"name":"ORiON","volume":"61 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2015-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80280669","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Standard Bank, South Africa, currently employs a methodology based on logistic regression when developing application or behavioural scorecards. A key aspect of building logistic regression models is variable selection, which involves dealing with multicollinearity. The objective of this study was to investigate the impact of using different variance inflation factor (VIF) thresholds on the performance of these models in a predictive and discriminatory context, and to study the stability of the estimated coefficients, in order to advise the bank. The impact of the choice of VIF thresholds was researched by means of an empirical study and a simulation study. The empirical study involved analysing two large data sets that represent the typical size encountered in a retail credit scoring context. The first analysis concentrated on fitting the various VIF models and comparing them in terms of the stability of coefficient estimates and goodness-of-fit statistics, while the second analysis focused on evaluating the fitted models' predictive ability over time. The simulation study was used to study the effect of multicollinearity in a controlled setting. All the above-mentioned studies indicate that the presence of multicollinearity in large data sets is of much less concern than in small data sets, and that the VIF criterion could be relaxed considerably when models are fitted to large data sets. The recommendations in this regard have been accepted and implemented by Standard Bank.
{"title":"The impact of pre-selected variance inflation factor thresholds on the stability and predictive power of logistic regression models in credit scoring","authors":"P. D. Jongh, E. D. Jongh, M. Pienaar, H. Gordon-Grant, M. Oberholzer, L. Santana","doi":"10.5784/31-1-162","DOIUrl":"https://doi.org/10.5784/31-1-162","url":null,"abstract":"Standard Bank, South Africa, currently employs a methodology when developing application or behavioural scorecards that involves logistic regression. A key aspect of building logistic regression models entails variable selection which involves dealing with multicollinearity. The objective of this study was to investigate the impact of using different variance inflation factor (VIF) thresholds on the performance of these models in a predictive and discriminatory context and to study the stability of the estimated coefficients in order to advise the bank. The impact of the choice of VIF thresholds was researched by means of an empirical and simulation study. The empirical study involved analysing two large data sets that represent the typical size encountered in a retail credit scoring context. The first analysis concentrated on fitting the various VIF models and comparing the fitted models in terms of the stability of coefficient estimates and goodness-of-fit statistics while the second analysis focused on evaluating the fitted models' predictive ability over time. The simulation study was used to study the effect of multicollinearity in a controlled setting. All the above-mentioned studies indicate that the presence of multicollinearity in large data sets is of much less concern than in small data sets and that the VIF criterion could be relaxed considerably when models are fitted to large data sets. The recommendations in this regard have been accepted and implemented by Standard Bank.","PeriodicalId":30587,"journal":{"name":"ORiON","volume":"1 1","pages":"17-37"},"PeriodicalIF":0.0,"publicationDate":"2015-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88907067","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
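The VIF screening at the heart of the study is easy to compute directly: VIF_j = 1/(1 - R_j^2), where R_j^2 comes from regressing predictor j on the remaining predictors; a large VIF flags a variable as collinear with the others. A minimal sketch follows — the function name and interface are ours, not the bank's tooling.

```python
import numpy as np

def vif(X):
    """Variance inflation factor for each column of a design matrix X
    (columns = candidate predictors, no intercept column).
    VIF_j = 1 / (1 - R_j^2), with R_j^2 from an OLS auxiliary regression
    of column j on all the other columns."""
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    out = []
    for j in range(p):
        y = X[:, j]
        Z = np.delete(X, j, axis=1)
        Z = np.column_stack([np.ones(n), Z])   # intercept for the auxiliary fit
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ beta
        ss_tot = (y - y.mean()) @ (y - y.mean())
        r2 = 1.0 - (resid @ resid) / ss_tot
        out.append(1.0 / (1.0 - r2))
    return np.array(out)
```

A common rule of thumb rejects predictors with VIF above 5 or 10; the study's point is that such thresholds can be relaxed considerably on large data sets.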
Research in the domain of school timetabling has essentially focused on applying various techniques such as integer programming, constraint satisfaction, simulated annealing, tabu search and genetic algorithms to calculate a solution to the problem. Optimisation techniques like simulated annealing, tabu search and genetic algorithms generally explore a solution space. Hyper-heuristics, on the other hand, search a heuristic space with the aim of providing a more generalised solution to the particular optimisation problem. This is a fairly new technique that has proven successful in solving various combinatorial optimisation problems. There has not been much research into the use of hyper-heuristics to solve the school timetabling problem. This study investigates the use of a genetic algorithm selection perturbative hyper-heuristic for solving the school timetabling problem. A two-phased approach is taken, with the first phase focusing on hard constraints and the second on soft constraints. The genetic algorithm uses tournament selection to choose parents, to which the mutation and crossover operators are applied. The genetic algorithm selection perturbative hyper-heuristic (GASPHH) was applied to five different school timetabling problems. The performance of the hyper-heuristic was compared to that of other methods applied to these problems, including a genetic algorithm applied directly to the solution space. GASPHH performed well over all five different types of school timetabling problems.
{"title":"A genetic algorithm selection perturbative hyper-heuristic for solving the school timetabling problem","authors":"Rushil Raghavjee, N. Pillay","doi":"10.5784/31-1-158","DOIUrl":"https://doi.org/10.5784/31-1-158","url":null,"abstract":"Research in the domain of school timetabling has essentially focused on applying various techniques such as integer programming, constraint satisfaction, simulated annealing, tabu search and genetic algorithms to calculate a solution to the problem. Optimization techniques like simulated annealing, tabu search and genetic algorithms generally explore a solution space. Hyper-heuristics, on the other hand, search a heuristic space with the aim of providing a more generalized solution to the particular optimisation problem. This is a fairly new technique that has proven to be successful in solving various combinatorial optimisation problems. There has not been much research into the use of hyper-heuristics to solve the school timetabling problem. This study investigates the use of a genetic algorithm selection perturbative hyper-heuristic for solving the school timetabling problem. A two-phased approach is taken, with the first phase focusing on hard constraints, and the second on soft constraints. The genetic algorithm uses tournament selection to choose parents, to which the mutation and crossover operators are applied. The genetic algorithm selection perturbative hyper-heuristic (GASPHH) was applied to five different school timetabling problems. The performance of the hyper-heuristic was compared to that of other methods applied to these problems, including a genetic algorithm that was applied directly to the solution space. GASPHH performed well over all five different types of school timetabling problems.","PeriodicalId":30587,"journal":{"name":"ORiON","volume":"20 1","pages":"39-60"},"PeriodicalIF":0.0,"publicationDate":"2015-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87137171","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
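The key distinction drawn above — a GA evolving sequences of low-level heuristics rather than timetables — can be sketched on a toy problem. Here the chromosome is a list of heuristic indices, and the GA (tournament selection, one-point crossover, mutation, as in the abstract) searches that heuristic space. The numeric toy objective and all parameter values are illustrative assumptions, not GASPHH itself.

```python
import random

# Low-level perturbation heuristics acting on a toy numeric "solution";
# in the paper these would be timetable moves (swap lessons, reassign rooms).
HEURISTICS = [lambda x: x + 1, lambda x: x - 1, lambda x: x + 3, lambda x: x - 3]
TARGET = 8   # toy optimum; fitness is distance to it (lower is better)

def fitness(chrom, start=0):
    x = start
    for h in chrom:                  # apply the evolved heuristic sequence
        x = HEURISTICS[h](x)
    return abs(x - TARGET)

def evolve(pop_size=20, length=8, gens=30, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randrange(len(HEURISTICS)) for _ in range(length)]
           for _ in range(pop_size)]
    best = min(pop, key=fitness)
    for _ in range(gens):
        nxt = []
        for _ in range(pop_size):
            p1 = min(rng.sample(pop, 3), key=fitness)   # tournament selection
            p2 = min(rng.sample(pop, 3), key=fitness)
            cut = rng.randrange(1, length)              # one-point crossover
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.2:                      # mutate one gene
                child[rng.randrange(length)] = rng.randrange(len(HEURISTICS))
            nxt.append(child)
        pop = nxt
        best = min(pop + [best], key=fitness)
    return best
```

The chromosome never encodes the solution itself, only which perturbation to apply next; that indirection is what lets a hyper-heuristic generalise across problem instances.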
{"title":"Editorial to Volume 30(2)","authors":"S. E. Visagie","doi":"10.5784/30-2-526","DOIUrl":"https://doi.org/10.5784/30-2-526","url":null,"abstract":"","PeriodicalId":30587,"journal":{"name":"ORiON","volume":"13 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2014-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75137220","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The willingness of a customer to pay for a product or service is mathematically captured by a price elasticity model. The model relates the responsiveness of customers to a change in the quoted price. In addition to overall price sensitivity, adverse selection could be observed whereby certain customer segments react differently towards price changes. In this paper the problem of determining optimal prices to quote prospective customers in credit retail is addressed such that the interest income to the lender will be maximised while taking price sensitivity and adverse selection into account. For this purpose a response model is suggested that overcomes non-concavity and unrealistic asymptotic behaviour which allows for a linearisation approach of the non-linear price optimisation problem. A two-stage linear stochastic programming formulation is suggested for the optimisation of prices while taking uncertainty in future price sensitivity into account. Empirical results are based on real data from a financial institution.
{"title":"Credit price optimisation within retail banking","authors":"S. Terblanche, T. Rey","doi":"10.5784/30-2-160","DOIUrl":"https://doi.org/10.5784/30-2-160","url":null,"abstract":"The willingness of a customer to pay for a product or service is mathematically captured by a price elasticity model. The model relates the responsiveness of customers to a change in the quoted price. In addition to overall price sensitivity, adverse selection could be observed whereby certain customer segments react differently towards price changes. In this paper the problem of determining optimal prices to quote prospective customers in credit retail is addressed such that the interest income to the lender will be maximised while taking price sensitivity and adverse selection into account. For this purpose a response model is suggested that overcomes non-concavity and unrealistic asymptotic behaviour which allows for a linearisation approach of the non-linear price optimisation problem. A two-stage linear stochastic programming formulation is suggested for the optimisation of prices while taking uncertainty in future price sensitivity into account. Empirical results are based on real data from a financial institution.","PeriodicalId":30587,"journal":{"name":"ORiON","volume":"12 1","pages":"85-102"},"PeriodicalIF":0.0,"publicationDate":"2014-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74727533","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
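The trade-off described above — higher quoted rates earn more per accepted loan but depress take-up — can be illustrated with a generic logistic price-response curve. This is not the paper's response model (which is specifically constructed to avoid the non-concavity of simple forms and to admit linearisation); the parameters `a` and `b` are illustrative assumptions.

```python
import numpy as np

def take_up_probability(rate, a=4.0, b=0.35):
    """Generic logistic price-response sketch: probability that a
    prospective customer accepts a loan quoted at interest `rate` (%).
    a, b are illustrative sensitivity parameters, not fitted values."""
    return 1.0 / (1.0 + np.exp(-(a - b * rate)))

def best_rate(rates):
    """Grid search for the quoted rate maximising expected interest income
    per quote, i.e. rate x acceptance probability."""
    income = rates * take_up_probability(rates)
    return rates[np.argmax(income)]
```

A deterministic grid search stands in for the paper's two-stage stochastic program; segment-level adverse selection would add separate response curves per customer segment.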