A location‐allocation problem faced by a company that aims to locate warehouses to supply products to a set of customers is addressed in this paper. The company's objective is to minimize the total cost of locating the warehouses plus the cost induced by inventory policies. However, the inventory decisions are made by different decision‐makers: once the company fixes the location decisions, the decision‐maker associated with each warehouse determines its own order quantity. Warehouses are allowed a certain maximum number of backorders, which represents an extra cost for them. This situation can be modeled as a bilevel programming problem, in which the upper level is associated with the company, which minimizes the costs related to location‐allocation and the total orders of each warehouse. Each warehouse is associated with an independent lower level, in which a warehouse manager minimizes the total inventory cost. The result is a single‐objective upper‐level problem with multiple independent non‐linear lower‐level problems, which is generally challenging to solve to optimality. A population‐based metaheuristic following the Brain Storm Optimization algorithm scheme is proposed. Each non‐linear lower‐level problem is solved with the Lagrangian method. Both decision levels are solved in a nested manner, yielding bilevel‐feasible solutions. To validate the effectiveness of the proposed algorithm, an enumerative algorithm is implemented, and computational experiments are conducted on a set of benchmark instances. Results show that the proposed algorithm attains optimality on small‐sized instances and remains efficient and consistent on larger‐sized instances.
Finally, managerial insights deduced from the computational experimentation and some proposals for future research directions are included.
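The lower-level decision at each warehouse resembles the textbook EOQ model with planned backorders. As a hedged illustration (not the paper's exact lower-level formulation; all parameter names are hypothetical), the closed-form optimum can be sketched as:

```python
from math import sqrt

def eoq_with_backorders(D, K, h, b):
    """Classic EOQ with planned backorders (illustrative sketch).
    D: demand per period, K: fixed ordering cost,
    h: holding cost per unit per period, b: backorder cost per unit per period.
    Returns (Q, S): order quantity and maximum backorder level."""
    Q = sqrt(2 * D * K / h) * sqrt((h + b) / b)
    S = Q * h / (h + b)
    return Q, S

def total_cost(Q, S, D, K, h, b):
    # ordering + holding + backorder cost per period
    return D * K / Q + h * (Q - S) ** 2 / (2 * Q) + b * S ** 2 / (2 * Q)
```

Letting the backorder penalty b grow large recovers the classic EOQ quantity, since backorders become prohibitively expensive.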
José‐Fernando Camacho‐Vallejo, Dámaris Dávila, Leopoldo Eduardo Cárdenas‐Barrón, "A warehouse location‐allocation bilevel problem that considers inventory policies," Networks, published online May 31, 2024. DOI: 10.1002/net.22235
The network design problem with vulnerability constraints and probabilistic edge reliability (NDPVC‐PER) extends the NDPVC by additionally considering edge reliability. We consider the design of a telecommunication network in which every origin‐destination pair is connected by a hop‐constrained primal path, and by a hop‐constrained backup path when certain edges in the network fail. Edge failures occur according to the edges' reliability, and the network is designed to meet a minimum reliability level. Therefore, a hop‐constrained backup path must be built by considering all simultaneous edge failures that have a certain probability of realization. While there exist models that solve the NDPVC without enumerating all edge subsets, the techniques applied to the NDPVC cannot handle edge reliability. We therefore develop models based on a new concept of resilient length‐bounded cuts, and solve the NDPVC‐PER without edge‐set enumeration. We perform extensive testing of the model to determine the best‐performing settings, and demonstrate the computational efficiency of the developed model. Our findings show that, on the dataset considered in this study, increasing the reliability level from 90% to 95% increases the average cost by only 12.4%, while increasing it from 95% to 99% yields a cost increase of 93.9%.
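For intuition on the hop constraints: a primal or backup path is feasible only if it reaches its destination within a hop budget once the failed edges are removed. A minimal BFS check (illustrative only; not the paper's cut-based model) can be sketched as:

```python
from collections import deque

def hop_distance(n, edges, s, t, failed=frozenset()):
    """Minimum number of hops from s to t in an undirected graph on
    n vertices, ignoring edges in `failed`; None if disconnected."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        if (u, v) in failed or (v, u) in failed:
            continue
        adj[u].append(v)
        adj[v].append(u)
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        if u == t:
            return dist[u]
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    return None
```

A design is hop-feasible for a failure scenario when `hop_distance(..., failed=scenario)` does not exceed the hop limit for every origin-destination pair.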
Okan Arslan, Gilbert Laporte, "Network design with vulnerability constraints and probabilistic edge reliability," Networks, published online April 27, 2024. DOI: 10.1002/net.22222
Katharina Eickhoff, S. Thomas McCormick, Britta Peis, Niklas Rieken, Laura Vargas Koch
We consider a market where a set of objects is sold to a set of buyers, each equipped with a valuation function for the objects. The goal of the auctioneer is to determine reasonable prices together with a stable allocation. One definition of “reasonable” and “stable” is a Walrasian equilibrium, which is a tuple consisting of a price vector together with an allocation satisfying the following desirable properties: (i) the allocation is market-clearing in the sense that as much as possible is sold, and (ii) the allocation is stable in the sense that every buyer ends up with an optimal set with respect to the given prices. Moreover, “buyer-optimal” means that the prices are smallest possible among all Walrasian prices. In this paper, we present a combinatorial network flow algorithm to compute buyer-optimal Walrasian prices in a multi-unit matching market with truncated additive valuation functions. The algorithm can be seen as a generalization of the classical housing market auction and mimics the very natural procedure of an ascending auction. We use our structural insights to prove monotonicity of the buyer-optimal Walrasian prices with respect to changes in supply or demand.
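The ascending-auction idea can be illustrated in the unit-demand (housing market) special case: start from zero prices, and whenever no market-clearing allocation exists, raise the prices of a constricted (overdemanded) set of goods. The sketch below is a textbook-style illustration with integer valuations, not the paper's flow-based multi-unit algorithm:

```python
def ascending_auction(values):
    """Unit-demand ascending auction (Demange-Gale-Sotomayor style sketch).
    values[i][j]: integer valuation of buyer i for good j.
    Returns (prices, match) with match[i] = assigned good or None."""
    n_b, n_g = len(values), len(values[0])
    p = [0] * n_g
    while True:
        # demand sets at current prices (only non-negative utility)
        demand = []
        for i in range(n_b):
            best = max(values[i][j] - p[j] for j in range(n_g))
            demand.append([j for j in range(n_g)
                           if values[i][j] - p[j] == best and best >= 0])
        # maximum bipartite matching of buyers to demanded goods
        match_g = [None] * n_g          # good -> buyer
        def try_assign(i, seen):
            for j in demand[i]:
                if j in seen:
                    continue
                seen.add(j)
                if match_g[j] is None or try_assign(match_g[j], seen):
                    match_g[j] = i
                    return True
            return False
        unmatched = None
        for i in range(n_b):
            if demand[i] and not try_assign(i, set()):
                unmatched = i
                break
        if unmatched is None:           # market-clearing allocation found
            match_b = [None] * n_b
            for j, i in enumerate(match_g):
                if i is not None:
                    match_b[i] = j
            return p, match_b
        # constricted set: goods reachable from the unmatched buyer
        # via alternating paths; raise their prices by one unit
        reach = set(demand[unmatched])
        frontier = list(reach)
        while frontier:
            j = frontier.pop()
            i = match_g[j]
            if i is not None:
                for k in demand[i]:
                    if k not in reach:
                        reach.add(k)
                        frontier.append(k)
        for j in reach:
            p[j] += 1
```

With two buyers valuing good 0 at 4 and 3 (and good 1 at 1 each), the auction stops at prices (2, 0), the smallest prices at which demand can be cleared.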
Katharina Eickhoff, S. Thomas McCormick, Britta Peis, Niklas Rieken, Laura Vargas Koch, "A flow-based ascending auction to compute buyer-optimal Walrasian prices," Networks, published online April 16, 2024. DOI: 10.1002/net.22218
Celina M. H. de Figueiredo, Raul Lopes, Alexsander A. de Melo, Ana Silva
Chordal graphs are the intersection graphs of subtrees of a tree, while interval graphs are the intersection graphs of subpaths of a path. Undirected path graphs, directed path graphs, and rooted directed path graphs are intermediate graph classes, defined, respectively, as the intersection graphs of paths of a tree, of directed paths of an oriented tree, and of directed paths of an out‐branching. All of these path graphs have vertex leafage 2. The Dominating Set, Connected Dominating Set, and Steiner Tree problems are W[2]‐hard parameterized by the size of the solution on chordal graphs, NP‐complete on undirected path graphs, and polynomial‐time solvable on rooted directed path graphs, and hence also on interval graphs. We further investigate the (parameterized) complexity of all these problems when constrained to chordal graphs, taking the vertex leafage and the aforementioned classes into consideration. We prove that Dominating Set, Connected Dominating Set, and Steiner Tree are FPT on chordal graphs when parameterized by the size of the solution plus the vertex leafage, and that Weighted Connected Dominating Set is polynomial‐time solvable on strongly chordal graphs. We also introduce a new subclass of undirected path graphs, which we call in–out rooted directed path graphs, as the intersection graphs of directed paths of an in–out branching. We prove that Dominating Set, Connected Dominating Set, and Steiner Tree are solvable in polynomial time on this class, generalizing the polynomiality for rooted directed path graphs proved by Booth and Johnson (SIAM J. Comput. 11 (1982), 191‐199) and by White et al. (Networks 15 (1985), 109‐124).
Celina M. H. de Figueiredo, Raul Lopes, Alexsander A. de Melo, Ana Silva, "Parameterized algorithms for Steiner tree and (connected) dominating set on path graphs," Networks, published online April 15, 2024. DOI: 10.1002/net.22220
Recycling centers sort and process collected waste in the interest of the environment, but their operation also harms the climate through released emissions and pollutants. Consequently, governments are closing such centers to fulfill climate and carbon‐neutrality goals. However, such closures risk forcing populations to travel further to facilities that collect waste, and can place an unfair burden on the remaining open centers, thereby reducing participation in recycling. Using a facility‐location optimization model and mobility survey data for the state of Bavaria in Germany, we show how selective closures of these centers can still preserve high levels of recycling access. Our analysis shows that even when 20% of facilities are closed strategically, the median travel distance of residents to their assigned recycling center increases by only 450 m. Additionally, we find that Bavaria suffers from a disparity in recycling patterns between rural and urban regions, both in the motivation to recycle and in the locations of the facilities. To remove these regional differences, we promote a policy that favors retaining recycling centers in rural regions, reserving 75% of open facilities for rural areas while selectively closing facilities in urban regions. The success of recycling campaigns depends on public perception of facility closures and on their ease of access. As policymakers gradually implement further closures, such data‐driven strategies can help make the process more transparent to the public, thereby increasing the willingness to participate in such recycling programs.
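The headline metric above (median travel distance under a set of closures) is simple to evaluate once residents are assigned to their nearest open facility. A minimal sketch, assuming a hypothetical resident-to-facility distance matrix:

```python
from statistics import median

def median_travel(dist, open_facilities):
    """dist[i][j]: distance from resident i to facility j.
    Returns the median, over residents, of the distance to the
    nearest facility that remains open."""
    return median(min(dist[i][j] for j in open_facilities)
                  for i in range(len(dist)))
```

Comparing `median_travel(dist, all_sites)` against `median_travel(dist, kept_sites)` quantifies the access lost by a closure scenario.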
Malena Schmidt, Bismark Singh, "Selectively closing recycling centers in Bavaria: Reforming waste‐management policy to reduce disparity," Networks, published online April 15, 2024. DOI: 10.1002/net.22221
Pierre‐Luc Parent, Margarida Carvalho, Miguel F. Anjos, Ribal Atallah
With the increasing effects of climate change, the urgency to step away from fossil fuels is greater than ever before. Electric vehicles (EVs) are one way to diminish these effects, but their widespread adoption is often limited by the insufficient availability of charging stations. In this work, our goal is to expand the infrastructure of EV charging stations in order to provide a better quality of service in terms of user satisfaction (and availability of charging stations). Specifically, our focus is directed towards urban areas. We first propose a model for the assignment of EV charging demand to stations, framing it as a maximum flow problem. This model is the basis for the evaluation of user satisfaction with a given charging infrastructure. Second, we incorporate the maximum flow model into a mixed‐integer linear program, in which decisions on the opening of new stations and on the expansion of their capacity through additional outlets are accounted for. We showcase our methodology for the city of Montreal, demonstrating the scalability of our approach to handle real‐world scenarios. We conclude that considering both spatial and temporal variations in charging demand is meaningful when solving realistic instances.
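The demand-to-station assignment can be evaluated with any maximum-flow routine on a network with a source feeding demand nodes and stations feeding a sink. A compact Edmonds-Karp sketch (generic, not the paper's exact formulation) is:

```python
from collections import deque, defaultdict

def max_flow(capacity, s, t):
    """Edmonds-Karp maximum flow.
    capacity: dict {(u, v): cap} describing a directed graph."""
    cap = defaultdict(int)
    adj = defaultdict(set)
    for (u, v), c in capacity.items():
        cap[(u, v)] += c
        adj[u].add(v)
        adj[v].add(u)               # residual arc
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v in adj[u]:
                if v not in parent and cap[(u, v)] > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        # bottleneck along the augmenting path, then update residuals
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(cap[e] for e in path)
        for (u, v) in path:
            cap[(u, v)] -= aug
            cap[(v, u)] += aug
        flow += aug
```

In the assignment reading, source-to-demand arcs carry the charging requests, demand-to-station arcs encode reachability, and station-to-sink arcs carry outlet capacity; the flow value measures how much demand the infrastructure can satisfy.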
Pierre‐Luc Parent, Margarida Carvalho, Miguel F. Anjos, Ribal Atallah, "Maximum flow‐based formulation for the optimal location of electric vehicle charging stations," Networks, published online April 11, 2024. DOI: 10.1002/net.22219
Steffen Elting, Jan Fabian Ehmke, Margaretha Gansterer
Attended home deliveries (AHDs) are characterized by dynamic customer acceptance and narrow customer-specific delivery time windows. Both impede efficient routing and thus make AHDs very costly. In this article, we explore how established horizontal collaborative transportation planning methods can be adapted to render AHDs more efficient. The general idea is to enable request reallocation between multiple collaborating carriers after the order capture phase. We use an established centralized reallocation framework that allows participating carriers to submit delivery requests for reallocation. We extend this framework for AHD specifics such as the dynamic arrival of customer requests and information about delivery time windows. Using realistic instances based on the city of Vienna, we quantify the collaboration savings by solving the underlying routing and reallocation problems. We show that narrow time windows can lower the savings obtainable by the reallocation by up to 15%. Therefore, we suggest enhancing the decision processes of request selection and request bundling using information about delivery time windows. Our findings demonstrate that adapting the methods of request selection and bundle generation to environments with narrow time windows can increase collaboration savings by up to 25% and 35%, respectively, compared with methods that work well only when no time windows are imposed.
Steffen Elting, Jan Fabian Ehmke, Margaretha Gansterer, "Collaborative transportation for attended home deliveries," Networks, published online March 28, 2024. DOI: 10.1002/net.22216
Esther Bischoff, Saskia Kohn, Daniela Hahn, Christian Braun, Simon Rothfuß, Sören Hohmann
Providing high-quality solutions is crucial when solving NP-hard time-extended multi-robot task allocation (MRTA) problems. Reoptimization, that is, the concept of making use of a known solution to an optimization problem instance when the solution to a similar instance is sought, is a promising and rather new research field in this application domain. However, so far no approximate time-extended MRTA solution approaches exist for which guarantees on the resulting solution's quality can be given. We investigate the reoptimization problems of inserting a task into, and deleting a task from, a time-extended MRTA problem instance. For both problems, we can give performance guarantees in the form of an upper bound of 2 on the resulting approximation ratio for all heuristics fulfilling a mild assumption. We furthermore introduce specific solution heuristics and prove that smaller and tight upper bounds on the approximation ratio can be given for these heuristics if only temporally unconstrained tasks and homogeneous groups of robots are considered. A concluding evaluation of the reoptimization heuristic demonstrates near-optimal performance in practice.
Esther Bischoff, Saskia Kohn, Daniela Hahn, Christian Braun, Simon Rothfuß, Sören Hohmann, "Heuristic reoptimization of time-extended multi-robot task allocation problems," Networks, published online March 12, 2024. DOI: 10.1002/net.22217
Jason I. Brown, Theodore Kolokolnikov, Robert E. Kooij
We introduce two new methods for approximating the all‐terminal reliability of undirected graphs. First, we introduce an edge removal process: remove edges uniformly at random, one at a time, until the graph becomes disconnected. We show that the expected number of edges thus removed equals (m + 1)A, where m is the number of edges in the graph and A is the average value of the all‐terminal reliability polynomial. Based on this process, we propose a Monte‐Carlo algorithm to quickly estimate the graph reliability (whose exact computation is NP‐hard). Moreover, we show that the distribution of the edge removal process can be used to quickly approximate the reliability polynomial. We then propose increasingly accurate asymptotics for graph reliability based solely on the degree distribution of the graph. These asymptotics are tested against several real‐world networks and are shown to be accurate for sufficiently dense graphs. While the approach starts to fail for "subway‐like" networks that contain many paths of vertices of degree two, different asymptotics are derived for such networks.
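The Monte-Carlo idea can be sketched directly from the process description: shuffle the edges, count how many removals disconnect the graph (found efficiently with union-find over the reversed order), and average over trials. An illustrative implementation, assuming a connected input graph:

```python
import random

def removal_count(n, edges, rng):
    """One run of the edge-removal process: shuffle the edges and return
    the number of removals needed to disconnect the graph (n vertices).
    Assumes the input graph is connected."""
    order = edges[:]
    rng.shuffle(order)
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    comps = n
    added = 0
    # add edges back from the end of the removal order until connected
    for u, v in reversed(order):
        added += 1
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            comps -= 1
            if comps == 1:
                break
    # suffix of length `added` connects; removing one more edge does not
    return len(order) - added + 1

def avg_reliability(n, edges, trials=2000, seed=0):
    """Monte-Carlo estimate of the average A of the all-terminal
    reliability polynomial, using E[removals] = (m + 1) * A."""
    rng = random.Random(seed)
    m = len(edges)
    mean_T = sum(removal_count(n, edges, rng) for _ in range(trials)) / trials
    return mean_T / (m + 1)
```

For a triangle, every run removes exactly two edges before disconnection, so the estimate is 2/4 = 0.5, matching the exact average of its reliability polynomial.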
{"title":"New approximations for network reliability","authors":"Jason I. Brown, Theodore Kolokolnikov, Robert E. Kooij","doi":"10.1002/net.22215","DOIUrl":"https://doi.org/10.1002/net.22215","url":null,"abstract":"We introduce two new methods for approximating the all‐terminal reliability of undirected graphs. First, we introduce an edge removal process: remove edges at random, one at a time, until the graph becomes disconnected. We show that the expected number of edges thus removed is equal to , where is the number of edges in the graph, and is the average of the all‐terminal reliability polynomial. Based on this process, we propose a Monte‐Carlo algorithm to quickly estimate the graph reliability (whose exact computation is NP‐hard). Moreover, we show that the distribution of the edge removal process can be used to quickly approximate the reliability polynomial. We then propose increasingly accurate asymptotics for graph reliability based solely on degree distributions of the graph. These asymptotics are tested against several real‐world networks and are shown to be accurate for sufficiently dense graphs. While the approach starts to fail for “subway‐like” networks that contain many paths of vertices of degree two, different asymptotics are derived for such networks.","PeriodicalId":54734,"journal":{"name":"Networks","volume":"86 1","pages":""},"PeriodicalIF":2.1,"publicationDate":"2024-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140033495","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
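The edge-removal process described in the abstract is easy to simulate. The sketch below, assuming a graph given as a node count plus an edge list (all names are illustrative, not from the paper), estimates the expected number of random removals before disconnection by running the process many times; connectivity after each removal prefix is recovered efficiently by replaying the removals in reverse with union-find.

```python
import random

def edges_until_disconnect(n, edges, rng):
    """Remove edges in a random order; return how many removals it takes
    before the graph on n nodes first becomes disconnected."""
    order = list(edges)
    rng.shuffle(order)
    parent = list(range(n))  # union-find forest

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    components = n
    # Add edges back in reverse removal order; the first index k at which
    # the graph becomes connected means removing order[0..k] disconnects it,
    # so exactly k + 1 removals are needed.
    for k in range(len(order) - 1, -1, -1):
        u, v = order[k]
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            components -= 1
        if components == 1:
            return k + 1
    return 0  # the graph was disconnected to begin with

def estimate_expected_removals(n, edges, trials=2000, seed=0):
    """Monte-Carlo estimate of the expected removals before disconnection."""
    rng = random.Random(seed)
    return sum(edges_until_disconnect(n, edges, rng) for _ in range(trials)) / trials
```

For a triangle the count is deterministic: any single removal leaves a path, and any second removal disconnects it, so the estimate is exactly 2.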
Integrated commercial and operations planning model for schedule design, aircraft rotation and crew scheduling in airlines
Ankur Garg, Yogesh Agarwal, Rajiv Kumar Srivastava, Suresh Kumar Jakhar
Networks, published 2024-01-23. DOI: 10.1002/net.22211
Commercial and operations planning in airlines has traditionally been a hierarchical process, starting with flight schedule design, followed by fleet assignment, aircraft rotation planning, and finally crew scheduling. The hierarchical approach has the drawback that the optimal solution of a phase higher in the hierarchy may be infeasible for a subsequent phase, or may lead to a sub-optimal overall solution. In this paper, we solve a profit-maximizing integrated planning model for clean-sheet "rotated" schedule design with a flight re-time option and crew scheduling for a low-cost carrier (LCC) in an emerging market. While the aircraft rotation problem has traditionally been modeled in the literature as a daily routing of individual aircraft for maintenance requirements, in this work we address the requirement of planned aircraft rotations as part of schedule design for LCCs. Planned aircraft routing is important in our case to create as many via-flights as possible, given the underserved nature of the emerging market. We solve this large-scale integer-programming problem using two approaches: Benders Decomposition and Lagrangian Relaxation. For Lagrangian Relaxation, we exploit the special structure of our problem and the intuitive meaning of the Lagrangian duals to develop a multiplier adjustment approach that finds an improved lower bound on the integrated model solution. The crew-pairing sub-problem is solved using column generation via a multi-label shortest path algorithm, followed by branch-and-price for an integer solution. We test our solution methodology on a flight universe of 378 unique flights for different problem sizes by varying the number of aircraft available for operations. Our computational results show that, within a reasonable run time of a few hours, both approaches succeed in finding lower bounds on the integrated model solution that exceed the solution of the traditional hierarchical approach by 0.5%-2.5%. We find that the Lagrangian Relaxation methodology usually attains an improved solution faster than the Benders Decomposition approach, particularly for large-scale problems.
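Lagrangian Relaxation, as used in the paper above, can be illustrated on a toy model: relax a coupling constraint into the objective with a multiplier and tighten the resulting bound by subgradient steps. This is a generic textbook sketch on a tiny knapsack-style problem, not the paper's multiplier adjustment method; the model and all names are assumptions.

```python
def lagrangian_bound(c, a, b, iters=200, step0=1.0):
    """Subgradient search over the Lagrangian dual of
        max c.x  s.t.  a.x <= b,  x in {0,1}^n.
    Relaxing the constraint with multiplier lam >= 0 gives
        L(lam) = sum_j max(0, c[j] - lam * a[j]) + lam * b,
    an upper bound on the optimum for every lam; we keep the tightest
    bound found."""
    n = len(c)
    lam, best = 0.0, float("inf")
    for t in range(1, iters + 1):
        # The relaxed problem decomposes per item: pick j iff its
        # reduced profit c[j] - lam * a[j] is positive.
        x = [1 if c[j] - lam * a[j] > 0 else 0 for j in range(n)]
        bound = sum(max(0.0, c[j] - lam * a[j]) for j in range(n)) + lam * b
        best = min(best, bound)
        # b - a.x is a subgradient of L at lam; step with a diminishing
        # step size, projecting back onto lam >= 0.
        g = b - sum(a[j] * x[j] for j in range(n))
        lam = max(0.0, lam - (step0 / t) * g)
    return best
```

On the instance c = [6, 5], a = [4, 3], b = 5 the integer optimum is 6, while the Lagrangian dual (which here coincides with the LP relaxation bound) is 8; the subgradient iterates approach that value from above.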