Defining a storage-assignment strategy for precedence-constrained order picking
Maria A. M. Trindade, P. S. Sousa, Maria R. A. Moreira
Operations Research and Decisions, 2021. DOI: 10.37190/ord210207

This paper proposes a zero-one quadratic assignment model for the storage location assignment problem under weight constraints. Our analysis shows that operations can be improved using the model: compared with the strategy currently used by a real-life company, the new placement of products reduced the picking distance by up to 22%. According to the literature, this saving exceeds that achieved by creating density zones, a procedure commonly used to deal with weight constraints.
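The flavour of such a model can be sketched on a toy instance (all data below are invented, and the sketch is a linear simplification of the paper's quadratic model): products with given picking frequencies are assigned to slots with given travel distances, a weight rule forbids heavy items on high shelves, and the frequency-weighted picking distance is minimised.

```python
from itertools import permutations

# Toy instance (invented data): 3 products, 3 storage slots.
demand = [10, 4, 7]               # picking frequency of each product
dist = [2.0, 5.0, 8.0]            # travel distance to each slot
heavy = [True, False, False]      # weight class of each product
high_slot = [False, False, True]  # slot 2 is a high shelf

def feasible(assign):
    # assign[i] is the slot of product i; heavy products may not go up high
    return all(not (heavy[i] and high_slot[s]) for i, s in enumerate(assign))

def cost(assign):
    return sum(demand[i] * dist[s] for i, s in enumerate(assign))

best = min((a for a in permutations(range(3)) if feasible(a)), key=cost)
```

Brute-force enumeration only works for tiny instances; a real instance would be solved as a zero-one (quadratic) program with a solver.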
Inspecting debt servicing mechanism in Nigeria using ARMAX model of the Koyck-kind
U. Virtue, David E. Omoregie
Operations Research and Decisions, 2021. DOI: 10.37190/ORD210101

The burden of external debt affects the wellbeing of an economy by making it vulnerable to external shocks and by crowding out investment. When managing debt in heavily indebted poor countries such as Nigeria, the rational approach is to allocate a portion of export earnings to debt service payments, so the link between debt servicing and export earnings needs to be identified. Hence, the current and long-run effects of export earnings on debt service payments are modelled as a single-input single-output discrete-time dynamical system within the framework of the Autoregressive Moving Average Explanatory Input model of the Koyck kind (KARMAX). The KARMAX model is identified for Nigeria using World Bank data from 1970 to 2018 and the maximum likelihood (ML) method, and the results are compared with those of the prediction-error and instrumental-variable methods. From a theoretical perspective, the specification identified by the ML method is the most satisfactory. The article thus contributes to the literature on the econometrics of public debt management.
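A minimal sketch of the Koyck-transformed distributed-lag idea, on synthetic data rather than the Nigerian series (plain least squares stands in for the paper's ML estimation of the full KARMAX model): the regression y_t = a·y_{t-1} + b·x_t + e_t gives b as the current effect of x on y and b/(1-a) as the long-run effect.

```python
import numpy as np

# Simulate a Koyck-type process: y = debt service, x = export earnings
rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
a_true, b_true = 0.6, 1.5
y = np.zeros(n)
for t in range(1, n):
    y[t] = a_true * y[t - 1] + b_true * x[t] + 0.1 * rng.normal()

# OLS on the transformed equation: regressors are y_{t-1} and x_t
X = np.column_stack([y[:-1], x[1:]])
a_hat, b_hat = np.linalg.lstsq(X, y[1:], rcond=None)[0]
long_run = b_hat / (1 - a_hat)   # cumulative long-run multiplier of x on y
```

With a genuine moving-average error term, OLS would no longer be consistent, which is why ML, prediction-error, and instrumental-variable methods are compared in the paper.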
Forecasting the Confidence Interval of Efficiency in Fuzzy DEA
Azarnoosh Kafi, B. Daneshian, M. Rostamy-Malkhalifeh
Operations Research and Decisions, 2021. DOI: 10.37190/ORD210103

Data Envelopment Analysis (DEA) is a well-known method that calculates the efficiency of decision-making units (DMUs) based on their inputs and outputs. Comparing the efficiency and ranking of DMUs across time periods lets decision makers prevent losses in unit productivity and improve production planning. Despite their merits, DEA models cannot forecast the efficiency of future time periods from the known input/output records of the DMUs. To this end, this study proposes a forecasting algorithm with a 95% confidence interval that generates fuzzy data sets for future time periods; managers' opinions are also incorporated into the forecasting model. Equipped with the forecasted data sets and the data sets from previous periods, the model can forecast the efficiency of future time periods. The procedure also employs the simple geometric mean to discriminate between efficient units. Examples from a real case involving 20 automobile firms show the applicability of the proposed algorithm.
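The crisp CCR efficiency that fuzzy extensions build on can be sketched as a small linear program (illustrative data; the paper's fuzzy confidence-interval machinery would wrap this in lower/upper-bound data scenarios):

```python
import numpy as np
from scipy.optimize import linprog

# Rows are DMUs; here one input and one output per DMU (invented data)
X = np.array([[2.0], [4.0], [8.0]])   # inputs
Y = np.array([[2.0], [3.0], [4.0]])   # outputs

def ccr_efficiency(j, X, Y):
    """Input-oriented CCR multiplier model for DMU j."""
    m, s = X.shape[1], Y.shape[1]
    c = np.concatenate([np.zeros(m), -Y[j]])   # maximise u.y_j (linprog minimises)
    A_ub = np.hstack([-X, Y])                  # u.y_k - v.x_k <= 0 for every DMU k
    b_ub = np.zeros(X.shape[0])
    A_eq = np.concatenate([X[j], np.zeros(s)]).reshape(1, -1)  # v.x_j = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (m + s))
    return -res.fun

effs = [ccr_efficiency(j, X, Y) for j in range(3)]
```

Running the crisp model on the lower and upper bounds of forecasted fuzzy data would yield the kind of efficiency interval the paper forecasts.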
An inventory model for deteriorating items with imperfect quality under advance payment policy
B. Nath, Nabendu Sen
Operations Research and Decisions, 2021. DOI: 10.37190/ord210306

In the business world, suppliers commonly give a cash discount for advance payment. The buyer may pay either the total purchase cost or a fraction of it before receiving the products. If the buyer makes full payment, (s)he receives the cash discount instantly; if the buyer pays a fraction of the total purchase cost, (s)he receives the cash discount while paying the remaining amount on receiving the lot. Moreover, most inventory models assume that the delivered lot contains only perfect items, but in reality the presence of imperfect items in the received lot cannot be overlooked, as it affects the total profit of the system. Studying inventory models that account for imperfect items therefore makes the models more realistic, and the topic has received much attention from inventory managers. This paper develops a model that jointly considers imperfect-quality items and advance payment schemes (full and partial). The objective is to determine the optimal ordering quantity that maximises the total profit of the system. The theoretical results showing the existence of a global maximum are derived. The model is illustrated with numerical examples, and sensitivity analysis is carried out on important system parameters to assess their effects on the total profit. The study shows that the full advance payment scheme is beneficial for the buyer.
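The shape of such a profit function can be sketched numerically. Everything below — parameter names, values, and the functional form — is an assumption for exposition, not the paper's exact model: a lot of size Q contains a fraction p of imperfect units sold at a salvage price, and full advance payment earns a cash discount r on the purchase cost.

```python
import numpy as np

D   = 1000    # annual demand for good units (assumed)
c   = 10.0    # unit purchase cost
s   = 15.0    # selling price of a perfect unit
s_i = 6.0     # salvage price of an imperfect unit
K   = 50.0    # fixed ordering cost per cycle
h   = 0.5     # holding cost per unit per year
p   = 0.05    # fraction of each lot that is imperfect
r   = 0.02    # cash discount rate for full advance payment

def annual_profit(Q, advance_full=True):
    disc = r if advance_full else 0.0
    good = (1 - p) * Q                       # sellable units per lot
    revenue = s * good + s_i * p * Q
    cycle_cost = K + (1 - disc) * c * Q + h * good * (good / D) / 2
    return (D / good) * (revenue - cycle_cost)   # profit per year

Qs = np.arange(50, 1001)
best_Q = int(Qs[np.argmax([annual_profit(q) for q in Qs])])
```

Even in this toy version, the full-advance-payment profit dominates the no-discount case at the optimal lot size, mirroring the paper's conclusion.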
Human factor aspects in information security management in the traditional IT and cloud computing models
Paweł Kobis
Operations Research and Decisions, 2021. DOI: 10.37190/ORD210104

This paper attempts to classify the main areas of threat occurring in enterprises' information management processes. Particular attention is paid to the human factor, which is present in virtually every area of information security management. The author specifies the threats arising from the IT techniques and technologies used and from the information system models present in business entities. The empirical part presents the author's research on information security in business organisations using the traditional IT model and the cloud computing model, and compares the results obtained for the two models.
Computing power indices for weighted voting games via dynamic programming
J. Staudacher, L. Kóczy, Izabella Stach, Jan Filipp, Marcus Kramer, Till Noffke, Linus Olsson, Jonas Pichler, Tobias Singer
Operations Research and Decisions, 2021. DOI: 10.37190/ord210206

We study the efficient computation of power indices for weighted voting games using the paradigm of dynamic programming. We survey the state-of-the-art algorithms for computing the Banzhaf and Shapley–Shubik indices and point out how these approaches carry over to related power indices. Within a unified framework, we present new efficient algorithms for the Public Good index and for a recently proposed power index based on minimal winning coalitions of smallest size, as well as the first method for computing Johnston indices for weighted voting games efficiently. We introduce a software package providing fast C++ implementations of all the power indices mentioned in this article, and discuss computing times and storage requirements.
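The dynamic-programming idea can be shown directly for the normalised Banzhaf index — this is the standard swing-counting DP over coalition weights, not the package's C++ implementation:

```python
# Normalised Banzhaf indices for a weighted voting game [q; w_1, ..., w_n]
def banzhaf(quota, weights):
    total = sum(weights)
    raw = []
    for i, wi in enumerate(weights):
        # cnt[w] = number of coalitions of the OTHER players with weight w
        cnt = [0] * (total + 1)
        cnt[0] = 1
        for j, wj in enumerate(weights):
            if j == i:
                continue
            for w in range(total - wj, -1, -1):   # knapsack-style update
                if cnt[w]:
                    cnt[w + wj] += cnt[w]
        # player i swings coalition S iff  quota - wi <= weight(S) < quota
        raw.append(sum(cnt[w] for w in range(max(quota - wi, 0), quota)))
    s = sum(raw)
    return [x / s for x in raw]

# [51; 49, 49, 2]: despite unequal weights, all three players have equal power
idx = banzhaf(51, [49, 49, 2])
```

The DP runs in O(n²·W) time for n players with total weight W, avoiding the 2ⁿ enumeration of coalitions.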
A compositional approach to two-stage Data Envelopment Analysis in intuitionistic fuzzy environment
Nafiseh Javaherian, A. Hamzehee, H. S. Tooranloo
Operations Research and Decisions, 2021. DOI: 10.37190/ORD210102

Classical Data Envelopment Analysis methods measure the efficiency of decision-making units (DMUs) relative to similar units without taking their internal structure into account. However, some DMUs consist of two stages: the first stage produces an intermediate product, which the second stage consumes to produce the final output. The efficiency of this type of DMU is often measured using two-stage network DEA. In the real world, most data are vague, and vague inputs and outputs create uncertainty challenges for DMUs; when such uncertainty appears, intuitionistic fuzzy sets can convey more information than classical fuzzy sets. This paper presents a two-stage network DEA model based on intuitionistic fuzzy data, which measures the efficiency of the first and second stages of each DMU and ultimately derives the overall efficiency from the stage efficiencies.
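A crisp, single-measure sketch of the compositional idea (invented data; the paper works with intuitionistic fuzzy data and DEA weight systems rather than plain ratios): an input x feeds stage 1, which produces an intermediate z that stage 2 turns into output y; each stage's efficiency is its ratio relative to the best DMU, and the overall efficiency is the product of the two.

```python
# One input, one intermediate, one output per DMU (invented data)
x = [2.0, 3.0, 4.0]   # stage-1 inputs
z = [4.0, 3.0, 6.0]   # intermediates: stage-1 output = stage-2 input
y = [2.0, 1.5, 4.5]   # stage-2 final outputs

r1 = [zi / xi for zi, xi in zip(z, x)]   # stage-1 productivity ratios
r2 = [yi / zi for yi, zi in zip(y, z)]   # stage-2 productivity ratios
e1 = [r / max(r1) for r in r1]           # stage-1 efficiencies
e2 = [r / max(r2) for r in r2]           # stage-2 efficiencies
overall = [a * b for a, b in zip(e1, e2)]
```

Note that a DMU can be efficient in one stage yet inefficient overall, which is exactly the information a black-box DEA model loses.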
Dynamic setting of shipping points in logistics systems with multiple heterogeneous warehouses
Dobromir Herzog
Operations Research and Decisions, 2021. DOI: 10.37190/ord210204

This study models the planning process of handling outbound deliveries from a set of geographically dispersed warehouses. The model incorporates parameters observed in real-life supply chains and allows the simulation of various process variants, supporting decision making in day-to-day shipment management as well as strategic planning of the distribution network. A heuristic algorithm for selecting source warehouses for shipments is proposed. The model is built and tested on real business data, and its performance proves better than that of the procedure currently used by the reference company.
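A hypothetical greedy variant of such a heuristic (the data, and the nearest-warehouse-with-stock rule itself, are invented for illustration, not the paper's algorithm):

```python
# Remaining stock per warehouse and per item (invented data)
stock = {"W1": {"A": 2, "B": 0}, "W2": {"A": 1, "B": 3}}
# Delivery distance from each warehouse to each customer
dist = {("W1", "c1"): 10, ("W2", "c1"): 25,
        ("W1", "c2"): 30, ("W2", "c2"): 5}
orders = [("c1", "A"), ("c2", "B"), ("c1", "B")]

def assign(orders, stock, dist):
    """Greedily source each order from the nearest warehouse with stock."""
    plan = []
    for cust, item in orders:
        candidates = [w for w in stock if stock[w].get(item, 0) > 0]
        w = min(candidates, key=lambda wh: dist[(wh, cust)])
        stock[w][item] -= 1          # reserve the unit
        plan.append((cust, item, w))
    return plan

plan = assign(orders, stock, dist)
```

Because stock is consumed as orders are assigned, the order of processing matters — one of the effects a simulation model of the process can quantify.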
Prediction of pork meat prices by selected methods as an element supporting the decision-making process
Monika Zielińska-Sitkiewicz, M. Chrzanowska
Operations Research and Decisions, 2021. DOI: 10.37190/ord210307

Forecasts of economic processes can be obtained with various methods, each with its own characteristics and underlying assumptions. In agriculture, forecasting is an essential element of efficient management of the entire farming process. The pork sector is one of the main agricultural sectors in the world: pork consumption and supply are the highest among all types of meat, and Poland belongs to the group of large producers. The article analyses the prices of class E pork, expressed in euro per 100 kg of carcass, recorded from May 2004 to December 2019; the data come from the Agri-food data portal. A creeping trend model with segments of linear trends of various lengths and the ARIMA model-building methodology are used to forecast these prices. The accuracy of the forecasts is verified with ex post and ex ante forecast errors, graphical analysis, and backcasting. The study shows that both methods can be used to predict pork prices.
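The creeping-trend mechanics can be sketched as overlapping short linear fits (the prices and the segment length k below are illustrative assumptions, not the paper's series): each window of k observations gets its own linear trend, the fitted values are averaged where windows overlap, and the last segment is extrapolated one step ahead.

```python
import numpy as np

prices = np.array([140., 142., 145., 150., 149., 153., 158., 160.])
k = 4                          # segment length (assumed)
t = np.arange(len(prices))

# Fit a linear trend on every window of k consecutive observations
fits = np.full((len(prices) - k + 1, len(prices)), np.nan)
for i in range(len(prices) - k + 1):
    b, a = np.polyfit(t[i:i + k], prices[i:i + k], 1)
    fits[i, i:i + k] = a + b * t[i:i + k]
smoothed = np.nanmean(fits, axis=0)   # average the overlapping trend values

# The trend of the last segment supplies the one-step-ahead forecast
b, a = np.polyfit(t[-k:], prices[-k:], 1)
forecast = a + b * len(prices)
```

An ARIMA forecast of the same series would come from an identification/estimation/diagnosis cycle instead; the paper compares the two routes via ex post and ex ante errors.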
Sensitivity analysis of grey linear programming for optimisation problems
D. Darvishi, F. Pourofoghi, J. Forrest
Operations Research and Decisions, 2021. DOI: 10.37190/ord210402

In linear programming, sensitivity analysis of the parameters is often more important than the optimal solution itself. Traditional sensitivity analysis finds, for a coefficient, the range over which it may change while the optimal solution is maintained; such changes may concern the coefficients of the objective function or of the functional constraints, such as resource values or technical coefficients. When real-world problems are highly imprecise owing to limited data and information, grey systems methods are used to perform the needed optimisation. Several algorithms for solving grey linear programming have been developed to handle the inaccuracies in the model parameters, but these methods are complex and require much computational time. In this paper, the sensitivity of grey linear programming problems is analysed using the definitions and operators of grey numbers, and the uncertainty in the parameters is preserved in the solutions obtained from the sensitivity analysis. An applied numerical example is solved to evaluate the efficiency and usefulness of the developed method.
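The grey-number bookkeeping behind such an analysis can be sketched with simple interval arithmetic (a minimal illustration of the idea, not the paper's operators): uncertain coefficients are carried as lower/upper bounds, so the objective value comes out as an interval rather than a point estimate.

```python
# Minimal grey (interval) number: a value known only to lie in [lo, hi]
class Grey:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Grey(self.lo + other.lo, self.hi + other.hi)

    def scale(self, k):
        # multiply by a non-negative crisp scalar (bounds keep their order)
        return Grey(k * self.lo, k * self.hi)

# Objective z = c1*x1 + c2*x2 with grey costs at a crisp solution point
c1, c2 = Grey(3, 4), Grey(1, 2)
x1, x2 = 2.0, 5.0
z = c1.scale(x1) + c2.scale(x2)   # z lies in [3*2 + 1*5, 4*2 + 2*5]
```

Propagating bounds this way is what lets the sensitivity ranges inherit the parameters' uncertainty instead of collapsing it.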