A two-stage stochastic programming model is used to solve the electricity generation planning problem in South Africa for the period 2013 to 2050, with the aim of minimising expected cost. The costs considered are capital and running costs. Unknown future electricity demand is the source of uncertainty and is represented by four equally probable scenarios. The results show that the main contributors to new capacity are coal, wind, hydro and gas/diesel. The minimum costs obtained by solving the two-stage stochastic programming models range from R2 201 billion to R3 094 billion.
{"title":"Application of stochastic programming to electricity generation planning in South Africa","authors":"M. Bashe, M. Shuma-Iwisi, M. V. Wyk","doi":"10.5784/35-2-651","DOIUrl":"https://doi.org/10.5784/35-2-651","url":null,"abstract":"A two-stage stochastic programming model is used to solve the electricity generation planning problem in South Africa for the period 2013 to 2050, in an attempt to minimise expected cost. Costs considered are capital and running costs. Unknown future electricity demand is the source of uncertainty represented by four scenarios with equal probabilities. The results show that the main contributors for new capacity are coal, wind, hydro and gas/diesel. The minimum costs obtained by solving the two-stage stochastic programming models range from R2 201 billion to R3 094 billion.","PeriodicalId":30587,"journal":{"name":"ORiON","volume":"220 1","pages":"88-125"},"PeriodicalIF":0.0,"publicationDate":"2019-12-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75894333","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper order batching is extended to a picking system with the layout of a unidirectional cyclical picking line. The objective is to minimise the walking distance of pickers in the picking line. The setup of the picking system under consideration is related to unidirectional carousel systems. Three order-to-route closeness metrics are introduced to approximate walking distance, since the orders are batched before the pickers are routed. All metrics are based on the picking locations at which a picker has to stop to collect the items for an order. The metrics are the number of stops, the number of non-identical stops and a stops ratio. Besides exact solution approaches, four greedy heuristics as well as six metaheuristics are applied to combine similar orders into batches. All metrics are tested using real-life data from 50 sample picking lines in a distribution centre of a prominent South African retailer. The capacity of the picking device is restricted, so a maximum batch size of two orders per batch is allowed. The best combination of metric and solution approach is identified. A regression analysis supports the idea that the introduced metrics can be used to approximate walking distance. The combination of the stops ratio metric and the greedy random heuristic generates the best results in terms of both the minimum number of total cycles traversed and the computational time required to find the solution.
{"title":"Picking location metrics for order batching on a unidirectional cyclical picking line","authors":"F. Hofmann, SE Visagie","doi":"10.5784/35-2-646","DOIUrl":"https://doi.org/10.5784/35-2-646","url":null,"abstract":"In this paper order batching is extended to a picking system with the layout of a unidirectional cyclical picking line. The objective is to minimise the walking distance of pickers in the picking line. The setup of the picking system under consideration is related to unidirectional carousel systems. Three order-to-route closeness metrics are introduced to approximate walking distance, since the orders will be batched before the pickers are routed. All metrics are based on the picking location describing when a picker has to stop at a location to collect the items for an order. These metrics comprise a number of stops, a number of non-identical stops and a stops ratio measurement. Besides exact solution approaches, four greedy heuristics as well as six metaheuristics are applied to combine similar orders in batches. All metrics are tested using real life data of 50 sample picking lines in a distribution centre of a prominent South African retailer. The capacity of the picking device is restricted, thus the maximum batch size of two orders per batch is allowed. The best combination of metric and solution approach is identified. A regression analysis supports the idea that the introduced metrics can be used to approximate walking distance. The combination of stops ratio metric and the greedy random heuristic generate the best results in terms of minimum number of total cycles traversed as well as computational time to find the solution.","PeriodicalId":30587,"journal":{"name":"ORiON","volume":"12 1","pages":"161-186"},"PeriodicalIF":0.0,"publicationDate":"2019-12-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84998263","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The exponential distribution plays a key role in the practical application of reliability theory, survival analysis, engineering and queuing theory. These applications often rely on the underlying assumption that the observed data originate from an exponential distribution. In this paper, two new tests for exponentiality are proposed, which are based on a conditional second moment characterisation. The proposed tests are compared to various established tests for exponentiality by means of a simulation study where it is found that the new tests perform favourably relative to the existing tests. The tests are also applied to real-world data sets with independent and identically distributed data as well as to simulated data from a Cox proportional hazards model, to determine whether the residuals obtained from the fitted model follow a standard exponential distribution.
{"title":"New goodness-of-fit test for exponentiality based on a conditional moment characterisation","authors":"M. Smuts, J. Allison, L. Santana","doi":"10.5784/35-2-661","DOIUrl":"https://doi.org/10.5784/35-2-661","url":null,"abstract":"The exponential distribution plays a key role in the practical application of reliability theory, survival analysis, engineering and queuing theory. These applications often rely on the underlying assumption that the observed data originate from an exponential distribution. In this paper, two new tests for exponentiality are proposed, which are based on a conditional second moment characterisation. The proposed tests are compared to various established tests for exponentiality by means of a simulation study where it is found that the new tests perform favourably relative to the existing tests. The tests are also applied to real-world data sets with independent and identically distributed data as well as to simulated data from a Cox proportional hazards model, to determine whether the residuals obtained from the fitted model follow a standard exponential distribution.","PeriodicalId":30587,"journal":{"name":"ORiON","volume":"19 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2019-12-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81038763","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The rapid development and proliferation of global positioning system (GPS)-enabled systems and devices have led to a significant increase in the availability of transport data, more specifically GPS trajectories, that can be used to research vehicle activities. To save data storage and handling costs, many vehicle tracking systems store only low-frequency trajectories for vehicles. A number of existing methods for mapping GPS trajectories to a digital road network were analysed, and a map-matching algorithm was implemented in the Multi-Agent Transport Simulation (MATSim), an open-source collaborative simulation package written in Java. The map-matching algorithm was tested on a simple grid network and on the real, extensive network of the City of Cape Town, South Africa. Experimentation showed that network size has the biggest influence on algorithm execution time and that the network must be reduced to include only the links the vehicle most likely traversed. The algorithm is not suited to trajectories with sampling intervals of less than 5 seconds, as these can result in unrealistic paths being chosen, but it obtains accuracies of around 80% up to sampling intervals of around 50 seconds, after which accuracy decreases. Further experimentation also revealed optimal algorithm parameters for matching trajectories on the Cape Town network. The use case for the implementation was to infer basic vehicle travel information, such as the route travelled and the speed of travel, for municipal waste collection vehicles in the City of Cape Town, South Africa.
{"title":"Development of a map-matching algorithm for dynamic-sampling-rate GPS signals to determine vehicle routes on a MATSim network","authors":"Jb Vosloo, J. Joubert","doi":"10.5784/35-1-636","DOIUrl":"https://doi.org/10.5784/35-1-636","url":null,"abstract":"The rapid development and proliferation of global positioning system (GPS)-enabled systems and devices have led to a significant increase in the availability of transport data, more specifically GPS trajectories, that can be used in researching vehicle activities. In order to save data storage- and handling costs many vehicle tracking systems only store low-frequency trajectories for vehicles. A number of existing methods used to map GPS trajectories to a digital road network were analysed and such an algorithm was implemented in Multi-Agent Transport Simulation (MATSim), an open source collaborative simulation package for Java. The map-matching algorithm was tested on a simple grid network and a real and extensive network of the City of Cape Town, South Africa. Experimentation showed the network size has the biggest influence on algorithm execution time and that a network must be reduced to include only the links that the vehicle most likely traversed. The algorithm is not suited for trajectories with sampling rates less than 5 seconds as it can result in unrealistic paths chosen, but it manages to obtain accuracies of around 80% up until sampling sizes of around 50 seconds whereafter the accuracy decreases. Further experimentation also revealed optimal algorithm parameters for matching trajectories on the Cape Town network. The use case for the implementation was to infer basic vehicle travel information, such as route travelled and speed of travel, for municipal waste collection vehicles in the City of Cape Town, South Africa.","PeriodicalId":30587,"journal":{"name":"ORiON","volume":"48 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2019-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74120248","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A technique known as calibration is often used when a given option pricing model is fitted to observed financial data. This entails choosing the parameters of the model so as to minimise some discrepancy measure between the observed option prices and the prices calculated under the model in question. The procedure does not take the historical values of the underlying asset into account. In this paper, the density function of the log-returns obtained using the calibration procedure is compared to a density estimate of the observed historical log-returns. Three models within the class of geometric Lévy process models are fitted to observed data: the Black-Scholes model as well as the geometric normal inverse Gaussian and Meixner process models. The numerical results obtained show a surprisingly large discrepancy between the resulting densities when the latter two models are used. An adaptation of the calibration methodology is also proposed, based on both option price data and the observed historical log-returns of the underlying asset. The implementation of this methodology limits the discrepancy between the densities in question.
{"title":"On the discrepancy between the objective and risk neutral densities in the pricing of European options","authors":"I. Visagie, G. Grobler","doi":"10.5784/35-1-647","DOIUrl":"https://doi.org/10.5784/35-1-647","url":null,"abstract":"A technique known as calibration is often used when a given option pricing model is fitted to observed financial data. This entails choosing the parameters of the model so as to minimise some discrepancy measure between the observed option prices and the prices calculated under the model in question. This procedure does not take the historical values of the underlying asset into account. In this paper, the density function of the log-returns obtained using the calibration procedure is compared to a density estimate of the observed historical log-returns. Three models within the class of geometric Lévy process models are fitted to observed data; the Black-Scholes model as well as the geometric normal inverse Gaussian and Meixner process models. The numerical results obtained show a surprisingly large discrepancy between the resulting densities when using the latter two models. An adaptation of the calibration methodology is also proposed based on both option price data and the observed historical log-returns of the underlying asset. The implementation of this methodology limits the discrepancy between the densities in question.","PeriodicalId":30587,"journal":{"name":"ORiON","volume":"17 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2019-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90064280","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fixed-time control and vehicle-actuated control are two distinct types of traffic signal control. The latter control method involves switching traffic signals based on detected traffic flows and thus offers more flexibility (appropriate for lighter traffic conditions) than the former, which relies solely on cyclic, predetermined signal phases that are better suited to heavier traffic conditions. The notion of self-organisation has relatively recently been proposed as an alternative approach towards improving traffic signal control, particularly under light traffic conditions, due to its flexible nature and its potential to result in emergent behaviour. The effectiveness of five existing self-organising traffic signal control strategies from the literature and a fixed-time control strategy are compared in this paper within a newly designed agent-based, microscopic traffic simulation model. Various shortcomings of three of these algorithms are identified and algorithmic improvements are suggested to remedy these deficiencies. The relative performance improvements resulting from these algorithmic modifications are then quantified by their implementation in the aforementioned traffic simulation model. Finally, a new self-organising algorithm is proposed that is particularly effective under lighter traffic conditions.
{"title":"Self-organisation in traffic signal control algorithms under light traffic conditions","authors":"Sj Movius, JH van Vuuren","doi":"10.5784/35-1-605","DOIUrl":"https://doi.org/10.5784/35-1-605","url":null,"abstract":"Fixed-time control and vehicle-actuated control are two distinct types of traffic signal control. The latter control method involves switching traffic signals based on detected traffic flows and thus offers more flexibility (appropriate for lighter traffic conditions) than the former, which relies solely on cyclic, predetermined signal phases that are better suited for heavier traffic conditions. The notion of self-organisation has relatively recently been proposed as an alternative approach towards improving traffic signal control, particularly under light traffic conditions, due to its flexible nature and its potential to result in emergent behaviour. The effectiveness of five existing self-organising traffic signal control strategies from the literature and a fixed-control strategy are compared in this paper within a newly designed agent-based, microscopic traffic simulation model. Various shortcomings of three of these algorithms are identified and algorithmic improvements are suggested to remedy these deficiencies. The relative performance improvements resulting from these algorithmic modifications are then quantified by their implementation in the aforementioned traffic simulation model. Finally, a new self-organising algorithm is proposed that is particularly effective under lighter traffic conditions.","PeriodicalId":30587,"journal":{"name":"ORiON","volume":"21 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2019-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86178348","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Either a direct or an indirect modelling methodology can be used to predict Loss Given Default (LGD). The indirect LGD methodology has two components, namely a loss severity component and a probability component. Commonly used models to predict the loss severity and probability components are the haircut and logistic regression models, respectively. In this article, survival analysis is proposed as an improvement on the more traditional logistic regression method. The mean squared error, bias and variance of the two methodologies were compared, and it was shown that the use of survival analysis enhanced the model's predictive power. The proposed LGD methodology (using survival analysis) was applied to two simulated datasets and two retail bank datasets, and according to the results obtained it outperformed the logistic regression LGD methodology. Additional benefits include that the new methodology allows for censoring as well as for predicting probabilities over varying outcome periods.
{"title":"Making use of survival analysis to indirectly model loss given default","authors":"Morné Joubert, T. Verster, H. Raubenheimer","doi":"10.5784/34-2-588","DOIUrl":"https://doi.org/10.5784/34-2-588","url":null,"abstract":"A direct or indirect modelling methodology can be used to predict Loss Given Default (LGD). When using the indirect LGD methodology, two components exist, namely, the loss severity component and the probability component. Commonly used models to predict the loss severity and the probability component are the haircut- and the logistic regression models, respectively. In this article, survival analysis was proposed as an improvement to the more traditional logistic regression method. The mean squared error, bias and variance for the two methodologies were compared and it was shown that the use of survival analysis enhanced the model's predictive power. The proposed LGD methodology (using survival analysis) was applied on two simulated datasets and two retail bank datasets, and according to the results obtained it outperformed the logistic regression LGD methodology. Additional benefits included that the new methodology could allow for censoring as well as predicting probabilities over varying outcome periods.","PeriodicalId":30587,"journal":{"name":"ORiON","volume":"45 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2019-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89940812","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}