Pub Date: 2025-01-04 | DOI: 10.1016/j.ejor.2024.12.034
Fotios Petropoulos , Evangelos Spiliotis
In an era dominated by big data and machine- and deep-learning solutions, judgment still has an important role to play in decision making. Behavioural operations are on the rise as judgment complements automated algorithms in many practical settings. Over the years, new and exciting uses of judgment have emerged, some of which provide fresh and innovative insights on algorithmic approaches. The forecasting field, in particular, has seen judgment enter several stages of the forecasting process, such as the production of purely judgmental forecasts, judgmental revisions of formal (statistical) forecasts, and judgment as an alternative to statistical selection between forecasting models. In this paper, we take the first steps towards exploring a neglected use of judgment in forecasting: the manual selection of the parameters of forecasting models. We focus on a simple but widely used forecasting model, Simple Exponential Smoothing, and, through a behavioural experiment, investigate the performance of individuals versus algorithms in selecting optimal modelling parameters under different conditions. Our results suggest that the use of judgment in parameter selection can improve forecasting accuracy. However, individuals also suffer from anchoring biases.
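The model at the centre of the experiment, Simple Exponential Smoothing, reduces to a one-line recursion in its smoothing parameter alpha (the quantity the paper's subjects select judgmentally). A minimal sketch; the function name and the initialisation with the first observation are illustrative assumptions, not the authors' experimental code:

```python
def ses_forecast(series, alpha):
    """One-step-ahead Simple Exponential Smoothing forecast.

    Recursion: level <- alpha * y_t + (1 - alpha) * level,
    initialised with the first observation (a common convention).
    Returns the forecast for the next, unobserved period.
    """
    if not 0 <= alpha <= 1:
        raise ValueError("alpha must lie in [0, 1]")
    level = series[0]                 # illustrative initialisation
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
    return level
```

With alpha = 0 the forecast never moves from the initial level, while alpha = 1 reduces to the naive forecast (last observation); the interesting judgmental choices lie in between.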
Judgmental selection of parameters for simple forecasting models. European Journal of Operational Research, 323(3), pp. 780-794.
Pub Date: 2025-01-04 | DOI: 10.1016/j.ejor.2024.12.029
Ibrahim Abada , Andreas Ehrenmann , Xavier Lambin
Energy communities are considered one of the pillars of the energy transition, owing to the rapid development of digital smart appliances and metering. They enjoy strong political support in Europe aimed at accommodating their penetration. Nevertheless, the pace at which they have developed has been much slower than was expected a decade ago. Many articles have revealed some of the underlying reasons, among which are social heterogeneity among participants, unfavorable local regulations, and inadequate governance. Most recently, a nascent body of research has highlighted the need to find adequate rules for sharing the benefits of community projects. Because of the complexity of these rules, the appointment of a community manager or coordinator may be necessary. This paper follows suit by providing guidance to policy makers and community managers on optimal risk-sharing schemes among members of an energy community. By modeling and simulating energy communities that invest in a rooftop photovoltaic project and face some degree of production and remuneration risk, we find that a high level of risk aversion makes it impossible to allocate the risk in a stable way. Furthermore, we show that some communities whose members’ risk aversion is too heterogeneous cannot form successfully. Moreover, even when risk can be allocated in a stable manner, we show that fair allocations are so complex that they require the intervention of a coordinator or a community manager. Finally, we analyze the advantages of developing judicious risk-sharing instruments between communities and a central entity to provide stability.
Risk-sharing in energy communities. European Journal of Operational Research, 322(3), pp. 870-888.
Pub Date: 2025-01-03 | DOI: 10.1016/j.ejor.2024.12.044
Ofek Lauber Bonomo , Uri Yechiali , Shlomi Reuveni
Service time fluctuations heavily affect the performance of queueing systems, causing long waiting times and backlogs. Recently, it was shown that when service times are solely determined by the server, service resetting can mitigate the deleterious effects of service time fluctuations and drastically improve queue performance (Bonomo et al., 2022). Yet, in many queueing systems, service times have two independent sources: the intrinsic server slowdown (S) and the jobs’ inherent size (X). In these so-called S&X queues (Gardner et al., 2017), service resetting results in a newly drawn server slowdown while the inherent job size remains unchanged. Remarkably, resetting can be useful even then. To show this, we develop a comprehensive theory of S&X queues with service resetting. We consider cases where the total service time is either a product or a sum of the service slowdown and the jobs’ inherent size. For both cases, we derive expressions for the total service time distribution and its mean under a generic service resetting policy. Two prevalent resetting policies are discussed in more detail. We first analyze the constant-rate (Poissonian) resetting policy and derive explicit conditions under which resetting reduces the mean service time and improves queue performance. Next, we consider the sharp (deterministic) resetting policy. While our results hold regardless of the arrival process, we dedicate special attention to the S&X-M/G/1 queue with service resetting, and obtain the distribution of the number of jobs in the system and their sojourn time. Our analysis highlights situations where service resetting can be used as an effective tool to improve the performance of S&X queueing systems. Several examples are given to illustrate our analytical results, which are corroborated using numerical simulations.
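The constant-rate (Poissonian) resetting policy described above is easy to mimic by Monte Carlo. The sketch below assumes the product form S·X and redraws only the slowdown S at each resetting epoch, with accumulated work lost; names and the choice of slowdown distribution are illustrative assumptions, not taken from the paper:

```python
import random

def service_time_with_resetting(x, draw_slowdown, rate, rng):
    """Time to complete one job of inherent size x when the server
    slowdown S is redrawn at Poissonian resetting epochs (rate `rate`).

    Product case: an uninterrupted run of length S * x is needed.
    A reset discards progress and redraws S; x stays unchanged.
    """
    total = 0.0
    while True:
        s = draw_slowdown(rng)          # fresh slowdown after each reset
        need = s * x
        if rate == 0:
            return total + need          # no resetting at all
        reset_in = rng.expovariate(rate)
        if need <= reset_in:
            return total + need          # job finishes before next reset
        total += reset_in                # reset fires: work so far is lost
```

Averaging this over many jobs with a heavy-tailed slowdown distribution is one way to see numerically when resetting reduces the mean service time, in the spirit of the conditions the paper derives analytically.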
Queues with service resetting. European Journal of Operational Research, 322(3), pp. 908-919.
Pub Date: 2025-01-01 | DOI: 10.1016/j.ejor.2024.12.043
P. Jean-Jacques Herings , Ana Mauleon , Vincent Vannetelbosch
We consider marriage problems where myopic and farsighted players interact and analyze these problems by means of the myopic-farsighted stable set. We require that coalition members are only willing to deviate if they all strictly benefit from doing so. Our first main result establishes the equivalence of myopic-farsighted stable sets based on arbitrary coalitional deviations and those based on pairwise deviations.
We are interested in the question of whether the core is still the relevant solution concept when myopic and farsighted agents interact, and whether more farsighted agents are able to secure more preferred core elements. For marriage problems where all players are myopic, as well as those where all players are farsighted, myopic-farsighted stable sets lead to the same prediction as the core. The same result holds for α-reducible marriage problems, without any assumptions on the set of farsighted agents. These results change when one side of the market is more farsighted than the other. For general marriage problems where all women are farsighted, only one core element can be part of a myopic-farsighted stable set: the woman-optimal stable matching. If the woman-optimal stable matching is dominated from the women's point of view by an individually rational matching, then no core element can be part of a myopic-farsighted stable set.
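The woman-optimal stable matching referenced above is the outcome of women-proposing deferred acceptance (Gale-Shapley). A compact sketch, assuming complete preference lists on both sides (the data layout and function name are our illustrative choices):

```python
def deferred_acceptance(proposer_prefs, responder_prefs):
    """Gale-Shapley deferred acceptance.

    proposer_prefs / responder_prefs: dict mapping each agent to a
    preference-ordered list of agents on the other side (complete lists
    assumed).  With women as proposers the result is the woman-optimal
    stable matching.
    """
    # responder -> {proposer: rank}, lower rank = more preferred
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in responder_prefs.items()}
    free = list(proposer_prefs)
    next_choice = {p: 0 for p in proposer_prefs}
    engaged = {}                       # responder -> proposer
    while free:
        p = free.pop()
        if next_choice[p] >= len(proposer_prefs[p]):
            continue                   # p has exhausted the list
        r = proposer_prefs[p][next_choice[p]]
        next_choice[p] += 1
        current = engaged.get(r)
        if current is None:
            engaged[r] = p             # r was free: tentatively accept
        elif rank[r][p] < rank[r][current]:
            engaged[r] = p             # r trades up; old partner is freed
            free.append(current)
        else:
            free.append(p)             # r rejects p
    return {p: r for r, p in engaged.items()}
```

Running the same routine with men as proposers yields the man-optimal stable matching, which makes the one-sided-farsightedness asymmetry in the abstract easy to experiment with.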
Do stable outcomes survive in marriage problems with myopic and farsighted players? European Journal of Operational Research, 322(2), pp. 713-724.
Pub Date: 2025-01-01 | DOI: 10.1016/j.ejor.2024.12.042
Weibin Han , Adrian Van Deemen
This paper deals with the problem of ranking a finite number of alternatives on the basis of a dominance relation. We first investigate some disadvantages of the Copeland ranking method, the degree ratio ranking method, and the modified degree ratio ranking method, which were characterized using clone properties and classical axiomatic properties. Then, we introduce alternative axiomatic properties and propose a new ranking method defined by the Copeland ratio of alternatives (i.e., the Copeland score of an alternative divided by its total degree). We show that the proposed ranking method coincides with the Copeland ranking method, the degree ratio ranking method, and the modified degree ratio ranking method for abstract decision problems with complete and asymmetric dominance relations. Subsequently, we prove that the new ranking method overcomes the mentioned disadvantages of those methods. Finally, we provide a characterization of the Copeland ratio ranking method using the introduced axiomatic properties.
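The Copeland ratio defined above (Copeland score divided by total degree) is straightforward to compute from a dominance relation. A small illustrative sketch; the convention of assigning ratio 0 to alternatives with no dominance links is our assumption, not the paper's:

```python
def copeland_ratio_ranking(alternatives, dominance):
    """Rank alternatives by Copeland ratio.

    dominance: set of ordered pairs (a, b) meaning "a dominates b".
    Copeland score = wins - losses; total degree = wins + losses;
    Copeland ratio = score / degree (0 if degree is 0, by convention).
    Returns (ranking, scores) with higher ratios ranked first.
    """
    scores = {}
    for a in alternatives:
        wins = sum(1 for b in alternatives if (a, b) in dominance)
        losses = sum(1 for b in alternatives if (b, a) in dominance)
        degree = wins + losses
        scores[a] = (wins - losses) / degree if degree else 0.0
    ranking = sorted(alternatives, key=lambda a: scores[a], reverse=True)
    return ranking, scores
```

On a complete and asymmetric dominance relation every alternative has the same degree, so the ordering reduces to the plain Copeland ranking, consistent with the coincidence result stated in the abstract.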
The Copeland ratio ranking method for abstract decision problems. European Journal of Operational Research, 323(3), pp. 966-974.
Capacity planning of police resources is crucial to operating an effective and robust police service. However, due to the high operational heterogeneity and variability among different calls for service, producing key performance estimates that link resource allocation and utilization to emergency response times is a challenging task in and of itself. In the literature, two main instruments are proposed to provide appropriate estimates: queueing models, which yield closed-form expressions for key performance characteristics under limiting assumptions, and simulation models, which seek to capture more of the real-world structure of police operations at the cost of increased computational effort. Utilizing an extensive dataset comprising over two million calls for service, we have created a discrete-event simulation tailored to capture police operations within a major metropolitan area in Germany. We compare this simulation against an implementation of the widely cited multiple-car dispatch queueing model by Green and Kolesar (1989) found in the literature. Our findings underscore that our simulation model yields significantly improved estimates for key performance indicators reflective of real-world scenarios. Notably, we demonstrate the consequential impact on resource allocation resulting from these enhanced estimates. The superior accuracy of our model facilitates the development of capacity plans that align more effectively with actual workloads, consequently fostering heightened security measures and cost efficiencies for society. Additionally, we rectify discrepancies in the presentation of the queueing model and highlight three specific areas for future research.
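As a flavour of the closed-form queueing instrument contrasted with simulation above (Green and Kolesar's multiple-car dispatch model itself is considerably richer), the textbook Erlang-C delay probability for an M/M/c queue is shown below; parameter names are ours:

```python
from math import factorial

def erlang_c(arrival_rate, service_rate, servers):
    """Erlang-C formula: probability that an arriving call must wait
    in an M/M/c queue (Poisson arrivals, exponential service,
    `servers` identical patrol units in this reading)."""
    a = arrival_rate / service_rate          # offered load in Erlangs
    rho = a / servers                        # per-server utilization
    if rho >= 1:
        return 1.0                           # unstable system: all wait
    summ = sum(a**k / factorial(k) for k in range(servers))
    top = a**servers / (factorial(servers) * (1 - rho))
    return top / (summ + top)
```

Estimates like this rest on the limiting assumptions the abstract mentions (homogeneous exponential service, single-call dispatches), which is precisely where a discrete-event simulation fed with real call data can do better.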
Simulative assessment of patrol car allocation and response time. Tobias Cors, Malte Fliedner, Knut Haase, Tobias Vlćek. European Journal of Operational Research, published 2024-12-31. DOI: 10.1016/j.ejor.2024.12.035.
Pub Date: 2024-12-31 | DOI: 10.1016/j.ejor.2024.12.037
Le Zhang, Shadi Sharif Azadeh, Hai Jiang
We study a class of assortment optimization problems where customers choose products according to the cross-nested logit (CNL) model and the number of products offered in the assortment cannot exceed a fixed number. Currently, no exact method exists for this NP-hard problem that can efficiently solve even small instances (e.g., 50 products with a cardinality limit of 10). In this paper, we propose an exact solution method that addresses this problem by finding the fixed point of a function through binary search. The parameterized problem at each iteration corresponds to a nonlinear binary integer programming problem, which we solve using a tailored Branch-and-Bound algorithm incorporating a novel variable-fixing mechanism, branching rule and upper bound generation strategy. Given that the computation time of the exact method can grow exponentially, we also introduce two polynomial-time heuristic algorithms with different solution strategies to handle larger instances. Numerical results demonstrate that our exact algorithm can optimally solve all test instances with up to 150 products and more than 90% of instances with up to 300 products within a one-hour time limit. Using the exact method as a benchmark, we find that the best-performing heuristic achieves optimal solutions for the majority of test instances, with an average optimality gap of 0.2%.
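The outer loop of the exact method described above (finding the fixed point of a function through binary search, with a parameterized subproblem solved at each iteration) can be sketched generically; the monotonicity assumption on the function and the toy example in the test are ours for illustration, not the paper's formulation:

```python
def fixed_point_binary_search(g, lo, hi, tol=1e-9):
    """Locate x with g(x) = x on [lo, hi] by bisection, assuming
    h(x) = g(x) - x is non-increasing with h(lo) >= 0 >= h(hi).

    In the assortment setting, evaluating g at a candidate value
    would correspond to solving the parameterized (here abstracted)
    nonlinear binary subproblem.
    """
    assert g(lo) >= lo and g(hi) <= hi, "bracket does not contain a fixed point"
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if g(mid) >= mid:
            lo = mid        # fixed point lies to the right
        else:
            hi = mid        # fixed point lies to the left
    return (lo + hi) / 2
```

Each bisection step halves the bracket, so the number of subproblem solves grows only logarithmically in the required precision; the expense in the paper's method sits inside the evaluation of g, not in the search itself.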
Exact and heuristic algorithms for cardinality-constrained assortment optimization problem under the cross-nested logit model. European Journal of Operational Research, published 2024-12-31.
Pub Date: 2024-12-31 | DOI: 10.1016/j.ejor.2024.12.041
Anne Meyer , Timo Gschwind , Boris Amberg , Dominik Colling
In intralogistics and manufacturing, autonomous mobile robots (AMRs) are usually electrically powered and recharged by battery swapping or induction. We investigate AMR route planning in these settings by studying different variants of the electric vehicle routing problem with due dates (EVRPD). We consider three common recharging strategies: battery swapping, inductive recharging with full recharges, and inductive recharging with partial recharges. Moreover, we consider two different objective functions: the standard objective of minimizing the total distance traveled and the minimization of the total completion times of transport jobs. The latter is of particular interest in intralogistics, where time aspects are of crucial importance and the earliest possible completion of jobs often has priority. In this context, recharging decisions also play an essential role. For solving the EVRPD variants, we propose exact branch-price-and-cut algorithms that rely on ad-hoc labeling algorithms tailored to the respective variants. We perform an extensive computational study to generate managerial insights on the AMR route planning problem and to assess the performance of our solution approach. The experiments are based on newly introduced instances featuring typical characteristics of AMR applications in intralogistics and manufacturing and on standard benchmark instances from the literature. The detailed analysis of our results reveals that inductive recharging with partial recharges is competitive with battery swapping, while using a full-recharges strategy has considerable drawbacks in an AMR setup.
Exact algorithms for routing electric autonomous mobile robots in intralogistics. European Journal of Operational Research, 323(3), pp. 830-851.
Pub Date: 2024-12-30 | DOI: 10.1016/j.ejor.2024.12.026
Walter Distaso , Francesco Roccazzella , Frédéric Vrins
We investigate the determinants of losses given default (LGD) in consumer credit. Utilizing a unique dataset encompassing over 6 million observations of Italian consumer credit over a long time span, we find that macroeconomic and social (MS) variables significantly enhance the forecasting performance at both individual and portfolio levels, improving R² by up to 10 percentage points. Our findings are robust across various model specifications. Non-linear forecast combination schemes employing neural networks consistently rank among the top performers in terms of mean absolute error, RMSE, R², and model confidence sets in every tested scenario. Notably, every model that belongs to the superior set systematically includes MS variables. The relationship between expected LGD and macro predictors, as revealed by accumulated local effects plots and Shapley values, supports the intuition that lower real activity, a rising cost-of-debt to GDP ratio, and heightened economic uncertainty are associated with higher LGD for consumer credit. Our results on the influence of MS variables complement and slightly differ from those of related papers. These discrepancies can be attributed to the comprehensive nature of our database (spanning broader dimensions in space, time, sectors, and types of consumer credit), the variety of models utilized, and the analyses conducted.
{"title":"Business cycle and realized losses in the consumer credit industry","authors":"Walter Distaso , Francesco Roccazzella , Frédéric Vrins","doi":"10.1016/j.ejor.2024.12.026","DOIUrl":"10.1016/j.ejor.2024.12.026","url":null,"abstract":"<div><div>We investigate the determinants of losses given default (LGD) in consumer credit. Utilizing a unique dataset encompassing over 6 million observations of Italian consumer credit over a long time span, we find that macroeconomic and social (MS) variables significantly enhance the forecasting performance at both individual and portfolio levels, improving R<sup>2</sup> by up to 10 percentage points. Our findings are robust across various model specifications. Non-linear forecast combination schemes employing neural networks consistently rank among the top performers in terms of mean absolute error, RMSE, R<sup>2</sup>, and model confidence sets in every tested scenario. Notably, every model that belongs to the superior set systematically includes MS variables. The relationship between expected LGD and macro predictors, as revealed by accumulated local effects plots and Shapley values, supports the intuition that lower real activity, a rising cost-of-debt to GDP ratio, and heightened economic uncertainty are associated with higher LGD for consumer credit. Our results on the influence of MS variables complement and slightly differ from those of related papers. 
These discrepancies can be attributed to the comprehensive nature of our database – spanning broader dimensions in space, time, sectors, and types of consumer credit – the variety of models utilized, and the analyses conducted.</div></div>","PeriodicalId":55161,"journal":{"name":"European Journal of Operational Research","volume":"323 3","pages":"Pages 1024-1039"},"PeriodicalIF":6.0,"publicationDate":"2024-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142974908","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-12-29DOI: 10.1016/j.ejor.2024.12.050
Juan José Díaz-Hernández , David-José Cova-Alonso , Eduardo Martínez-Budría
This paper proposes a model that makes two contributions to the measurement of technical efficiency under a technology with variable returns to scale. First, the criteria for identifying an optimal benchmark are not limited to technical dominance and Pareto efficiency, but also include maximum average productivity, defined as the ratio of a weighted linear aggregate of outputs to a weighted linear aggregate of inputs.
Second, the paper contributes a conceptual basis for correcting the shadow prices of inputs and outputs to reflect the influence of returns to scale. Debreu's loss function is used to value inefficiency as the difference between virtual input and virtual output, computed using the shadow prices of the supporting hyperplane at the optimal reference. The efficiency score is a virtual profitability index with endogenous shadow prices that reflect the valuation of inputs and outputs with a microeconomic rationale; i.e., it is not a distance measure that aggregates the differences between observed and optimal quantities with exogenous weights.
Two further results follow from these contributions. First, the radial input-output orientation to maximise productivity is endogenous. It is conditioned by the nature of the returns to scale. Second, the efficiency measure based on the loss function exhibits the desirable properties in a radial context, including the indication property, because the efficiency score incorporates non-radial slack.
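The "maximum average productivity" benchmark described above can be sketched with fixed, exogenous weights; note this is only an illustration of the ratio construction, since the paper's model instead derives endogenous shadow prices from the supporting hyperplane at the optimal reference. The DMU data and weights below are hypothetical.

```python
import numpy as np

# Toy DMUs (rows): two inputs each, one output each (synthetic numbers)
inputs = np.array([[2.0, 1.0],
                   [3.0, 2.0],
                   [4.0, 1.5]])
outputs = np.array([[4.0], [5.0], [6.0]])

# Exogenous weights, for illustration only; the paper's shadow prices
# are endogenous and depend on the returns to scale at the reference.
w_in = np.array([1.0, 0.5])
w_out = np.array([1.0])

virtual_input = inputs @ w_in        # weighted linear aggregate of inputs
virtual_output = outputs @ w_out     # weighted linear aggregate of outputs
avg_productivity = virtual_output / virtual_input

benchmark = avg_productivity.max()   # DMU with maximum average productivity
efficiency = avg_productivity / benchmark  # virtual profitability index in (0, 1]
print(np.round(efficiency, 3))
```

The benchmark DMU scores exactly 1, and every other DMU's score is the ratio of its average productivity to the benchmark's.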
{"title":"Measuring technical efficiency under variable returns to scale using Debreu's loss function","authors":"Juan José Díaz-Hernández , David-José Cova-Alonso , Eduardo Martínez-Budría","doi":"10.1016/j.ejor.2024.12.050","DOIUrl":"10.1016/j.ejor.2024.12.050","url":null,"abstract":"<div><div>This paper proposes a model that makes two contributions to the measurement of technical efficiency under a technology with variable returns to scale. First, the criteria for identifying an optimal benchmark are not limited to technical dominance and Pareto efficiency, but also include maximum average productivity, defined as the ratio between a weighted linear aggregate of outputs and inputs.</div><div>Second, the paper contributes a conceptual basis for correcting the shadow prices of inputs and outputs to reflect the influence of returns to scale. Debreu's loss function is used to value inefficiency as the difference between the virtual input and output using the shadow prices of the supporting hyperplane at the optimal reference. The efficiency score is a virtual profitability index with endogenous shadow prices that reflect the valuation of inputs and outputs with a microeconomic rationale, i.e., it is not a distance measure based on aggregation with exogenous weights of the difference between observed and optimal quantities.</div><div>Two further results follow from these contributions. First, the radial input-output orientation to maximise productivity is endogenous. It is conditioned by the nature of the returns to scale. 
Second, the efficiency measure based on the loss function exhibits the desirable properties in a radial context, including the indication property, because the efficiency score incorporates non-radial slack.</div></div>","PeriodicalId":55161,"journal":{"name":"European Journal of Operational Research","volume":"323 3","pages":"Pages 975-987"},"PeriodicalIF":6.0,"publicationDate":"2024-12-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142929298","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}