Hao Pan, Yajun Lu, Balabhaskar Balasundaram, Juan S. Borrero
The analysis of social and biological networks often involves modeling clusters of interest as cliques or their graph‐theoretic generalizations. The ‐club model, which relaxes the requirement of pairwise adjacency in a clique to length‐bounded paths inside the cluster, has been used to model cohesive subgroups in social networks and functional modules or complexes in biological networks. However, if the graphs are time‐varying, or if they change under different conditions, we may be interested in clusters that preserve their property over time or under changes in conditions. To model such clusters that are conserved in a collection of graphs, we consider a cross‐graph‐club model, a subset of nodes that forms a ‐club in every graph in the collection. In this article, we consider the canonical optimization problem of finding a cross‐graph ‐club of maximum cardinality in a graph collection. We develop integer programming approaches to solve this problem. Specifically, we introduce strengthened formulations, valid inequalities, and branch‐and‐cut algorithms based on delayed constraint generation. The results of our computational study indicate the significant benefits of using the approaches we introduce.
{"title":"Finding conserved low‐diameter subgraphs in social and biological networks","authors":"Hao Pan, Yajun Lu, Balabhaskar Balasundaram, Juan S. Borrero","doi":"10.1002/net.22246","DOIUrl":"https://doi.org/10.1002/net.22246","url":null,"abstract":"The analysis of social and biological networks often involves modeling clusters of interest as <jats:italic>cliques</jats:italic> or their graph‐theoretic generalizations. The ‐club model, which relaxes the requirement of pairwise adjacency in a clique to length‐bounded paths inside the cluster, has been used to model cohesive subgroups in social networks and functional modules or complexes in biological networks. However, if the graphs are time‐varying, or if they change under different conditions, we may be interested in clusters that preserve their property over time or under changes in conditions. To model such clusters that are conserved in a collection of graphs, we consider a <jats:italic>cross‐graph</jats:italic> <jats:italic>‐club</jats:italic> model, a subset of nodes that forms a ‐club in every graph in the collection. In this article, we consider the canonical optimization problem of finding a cross‐graph ‐club of maximum cardinality in a graph collection. We develop integer programming approaches to solve this problem. Specifically, we introduce strengthened formulations, valid inequalities, and branch‐and‐cut algorithms based on delayed constraint generation. 
The results of our computational study indicate the significant benefits of using the approaches we introduce.","PeriodicalId":54734,"journal":{"name":"Networks","volume":"6 1","pages":""},"PeriodicalIF":2.1,"publicationDate":"2024-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142195814","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
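As an illustration of the cross‐graph club definition above, the sketch below checks whether a node set induces a low‐diameter subgraph in every graph of a collection, with distances measured inside the induced subgraph as the club model requires. Here `k` stands in for the path‐length bound (elided in the text above), and the adjacency‐dict graph representation is an assumption for illustration:

```python
from collections import deque

def diameter_in_induced(adj, nodes):
    """Largest pairwise hop distance inside the subgraph induced by `nodes`
    (BFS restricted to `nodes`); returns infinity if it is disconnected."""
    nodes = set(nodes)
    worst = 0
    for s in nodes:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj.get(u, ()):
                if v in nodes and v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        if len(dist) < len(nodes):   # some node unreachable within the set
            return float('inf')
        worst = max(worst, max(dist.values()))
    return worst

def is_cross_graph_k_club(graphs, nodes, k):
    """A node set is a cross-graph k-club if it induces a subgraph of
    diameter at most k in every graph of the collection."""
    return all(diameter_in_induced(adj, nodes) <= k for adj in graphs)
```

Note that the induced‐diameter requirement is what distinguishes a club from the weaker "clique relaxation by distance in the whole graph": paths must stay inside the cluster.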
Centrality metrics have become a popular concept in network science and optimization. Over the years, centrality has been used to assign importance and identify influential elements in various settings, including transportation, infrastructure, biological, and social networks, among others. That said, most of the literature has focused on nodal versions of centrality. Recently, group counterparts of centrality have started attracting scientific and practitioner interest. The identification of sets of nodes that are influential within a network is becoming increasingly important. This is even more pronounced when these sets of nodes are required to induce a certain motif or structure. In this study, we review group centrality metrics from an operations research and optimization perspective for the first time. This is particularly interesting due to the rapid evolution and development of this area in the operations research community over the last decade. We first present a historical overview of how we have reached this point in the study of group centrality. We then discuss the different structures and motifs that appear prominently in the literature, alongside the techniques and methodologies that are popular. We finally present possible avenues and directions for future work, mainly in three areas: (i) probabilistic metrics to account for randomness, along with stochastic optimization techniques; (ii) structures and relaxations that have not yet been studied; and (iii) new emerging applications that can take advantage of group centrality. Our survey offers a concise review of group centrality and its intersection with network analysis and optimization.
{"title":"A survey on optimization studies of group centrality metrics","authors":"Mustafa Can Camur, Chrysafis Vogiatzis","doi":"10.1002/net.22248","DOIUrl":"https://doi.org/10.1002/net.22248","url":null,"abstract":"Centrality metrics have become a popular concept in network science and optimization. Over the years, centrality has been used to assign importance and identify influential elements in various settings, including transportation, infrastructure, biological, and social networks, among others. That said, most of the literature has focused on nodal versions of centrality. Recently, group counterparts of centrality have started attracting scientific and practitioner interest. The identification of sets of nodes that are influential within a network is becoming increasingly more important. This is even more pronounced when these sets of nodes are required to induce a certain motif or structure. In this study, we review group centrality metrics from an operations research and optimization perspective for the first time. This is particularly interesting due to the rapid evolution and development of this area in the operations research community over the last decade. We first present a historical overview of how we have reached this point in the study of group centrality. We then discuss the different structures and motifs that appear prominently in the literature, alongside the techniques and methodologies that are popular. We finally present possible avenues and directions for future work, mainly in three areas: (i) probabilistic metrics to account for randomness along with stochastic optimization techniques; (ii) structures and relaxations that have not been yet studied; and (iii) new emerging applications that can take advantage of group centrality. 
Our survey offers a concise review of group centrality and its intersection with network analysis and optimization.","PeriodicalId":54734,"journal":{"name":"Networks","volume":"45 1","pages":""},"PeriodicalIF":2.1,"publicationDate":"2024-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142195813","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
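As a concrete example of a group counterpart of a nodal metric, the sketch below computes group degree centrality in its standard form: the fraction of nodes outside the group adjacent to at least one group member. This is a generic illustration, not tied to any particular formulation surveyed in the article:

```python
def group_degree_centrality(adj, group):
    """Fraction of nodes outside `group` that are adjacent to at least
    one member of the group (the standard group-degree definition)."""
    group = set(group)
    outside = set(adj) - group
    # nodes outside the group that are "covered" by some group member
    covered = {v for u in group for v in adj.get(u, ()) if v not in group}
    return len(covered) / len(outside) if outside else 0.0
```

On a star graph, the center alone covers every other node (centrality 1.0), while a single leaf covers only the center; this already hints at why optimizing over *groups* differs from ranking individual nodes.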
Roberto Asín‐Achá, Alexis Espinoza, Olivier Goldschmidt, Dorit S. Hochbaum, Isaías I. Huerta
We present machine learning (ML) methods for automatically selecting a “best” performing fast algorithm for the capacitated vehicle routing problem (CVRP) with unit demands. Algorithm selection is to automatically choose, among a portfolio of algorithms, the one that is predicted to work best for a given problem instance, and algorithm configuration is to automatically select an algorithm's parameters that are predicted to work best for a given problem instance. We present a framework incorporating both algorithm selection and configuration for a portfolio that includes the automatically configured “Sweep Algorithm,” the first generated feasible solution of the hybrid genetic search algorithm, and the Clarke and Wright algorithm. The automatically selected algorithm is shown here to deliver high‐quality feasible solutions within very small running times, making it highly suitable for real‐time applications and for generating initial feasible solutions for global optimization methods for CVRP. These results bode well for the effectiveness of utilizing ML to improve combinatorial optimization methods.
{"title":"Selecting fast algorithms for the capacitated vehicle routing problem with machine learning techniques","authors":"Roberto Asín‐Achá, Alexis Espinoza, Olivier Goldschmidt, Dorit S. Hochbaum, Isaías I. Huerta","doi":"10.1002/net.22244","DOIUrl":"https://doi.org/10.1002/net.22244","url":null,"abstract":"We present machine learning (ML) methods for automatically selecting a “best” performing fast algorithm for the capacitated vehicle routing problem (CVRP) with unit demands. <jats:italic>Algorithm selection</jats:italic> is to automatically choose among a portfolio of algorithms the one that is predicted to work best for a given problem instance, and <jats:italic>algorithm configuration</jats:italic> is to automatically select algorithm's parameters that are predicted to work best for a given problem instance. We present a framework incorporating both algorithm selection and configuration for a portfolio that includes the automatically configured “Sweep Algorithm,” the first generated feasible solution of the hybrid genetic search algorithm, and the Clarke and Wright algorithm. The automatically selected algorithm is shown here to deliver high‐quality feasible solutions within very small running times making it highly suitable for real‐time applications and for generating initial feasible solutions for global optimization methods for CVRP. 
These results bode well to the effectiveness of utilizing ML for improving combinatorial optimization methods.","PeriodicalId":54734,"journal":{"name":"Networks","volume":"26 1","pages":""},"PeriodicalIF":2.1,"publicationDate":"2024-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141774650","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
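The Clarke and Wright algorithm named in the portfolio can be sketched as the classic parallel savings heuristic. The rendition below is a textbook version for unit demands (each customer contributes one unit toward vehicle capacity) and is not the authors' implementation; the dict‐of‐dicts distance matrix is an assumption for illustration:

```python
def clarke_wright_unit_demand(dist, capacity):
    """Classic Clarke-Wright parallel savings heuristic, sketched for unit
    demands: depot is node 0; start with one route per customer and merge
    routes at their ends in order of savings s(i,j) = d(0,i) + d(0,j) - d(i,j)."""
    customers = [v for v in dist if v != 0]
    route_of = {c: [c] for c in customers}   # customer -> its route (shared list)
    savings = sorted([(dist[0][i] + dist[0][j] - dist[i][j], i, j)
                      for i in customers for j in customers if i < j],
                     reverse=True)
    for s, i, j in savings:
        ri, rj = route_of[i], route_of[j]
        if ri is rj or len(ri) + len(rj) > capacity:
            continue
        # merge only if i and j sit at compatible route ends
        if ri[-1] == i and rj[0] == j:
            merged = ri + rj
        elif rj[-1] == j and ri[0] == i:
            merged = rj + ri
        else:
            continue
        for c in merged:
            route_of[c] = merged
    # collect distinct routes (deduplicate shared list objects by identity)
    seen, routes = set(), []
    for r in route_of.values():
        if id(r) not in seen:
            seen.add(id(r))
            routes.append(r)
    return routes
```

A fuller implementation would also allow merges that reverse a route; the sketch keeps only the two direct end‐to‐end cases for brevity.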
Matteo Petris, Claudia Archetti, Diego Cattaruzza, Maxime Ogier, Frédéric Semet
The commodity constrained split delivery vehicle routing problem (C‐SDVRP) is a routing problem where customer demands are composed of multiple commodities. A fleet of capacitated vehicles must serve customer demands in a way that minimizes the total routing costs. Vehicles can transport any set of commodities and customers are allowed to be visited multiple times. However, the demand for a single commodity must be delivered by one vehicle only. In this work, we develop a heuristic with a performance guarantee to solve the C‐SDVRP. The proposed heuristic is based on a set covering formulation, where the exponentially‐many variables correspond to routes. First, a subset of the variables is obtained by solving the linear relaxation of the formulation by means of a column generation approach, which embeds a new pricing heuristic aimed at reducing the computational time. Solving the linear relaxation gives a valid lower bound used as a performance guarantee for the heuristic. Then, we devise a restricted master heuristic to provide good upper bounds: the formulation is restricted to the subset of variables found so far and solved as an integer program with a commercial solver. A local search based on a mathematical programming operator is applied to improve the solution. We test the heuristic algorithm on benchmark instances from the literature. The comparison with the state‐of‐the‐art heuristics for solving the C‐SDVRP shows that our approach significantly improves the solution time, while keeping a comparable solution quality and improving some best‐known solutions. In addition, our approach is able to solve large instances with 100 customers and six commodities, and also provides very good quality lower bounds. Furthermore, an instance of the C‐SDVRP can be transformed into a CVRP instance by simply duplicating each customer as many times as the number of requested commodities and assigning each copy the demand of a single commodity. Hence, we also compare heuristics for the C‐SDVRP against the state‐of‐the‐art heuristic for the Capacitated Vehicle Routing Problem (CVRP). The latter approach turned out to perform best. However, our approach provides solutions of comparable quality and has the advantage of providing a performance guarantee.
{"title":"A heuristic with a performance guarantee for the commodity constrained split delivery vehicle routing problem","authors":"Matteo Petris, Claudia Archetti, Diego Cattaruzza, Maxime Ogier, Frédéric Semet","doi":"10.1002/net.22238","DOIUrl":"https://doi.org/10.1002/net.22238","url":null,"abstract":"The commodity constrained split delivery vehicle routing problem (C‐SDVRP) is a routing problem where customer demands are composed of multiple commodities. A fleet of capacitated vehicles must serve customer demands in a way that minimizes the total routing costs. Vehicles can transport any set of commodities and customers are allowed to be visited multiple times. However, the demand for a single commodity must be delivered by one vehicle only. In this work, we developed a heuristic with a performance guarantee to solve the C‐SDVRP. The proposed heuristic is based on a set covering formulation, where the exponentially‐many variables correspond to routes. First, a subset of the variables is obtained by solving the linear relaxation of the formulation by means of a column generation approach which embeds a new pricing heuristic aimed to reduce the computational time. Solving the linear relaxation gives a valid lower bound used as a performance guarantee for the heuristic. Then, we devise a restricted master heuristic to provide good upper bounds: the formulation is restricted to the subset of variables found so far and solved as an integer program with a commercial solver. A local search based on a mathematical programming operator is applied to improve the solution. We test the heuristic algorithm on benchmark instances from the literature. The comparison with the state‐of‐the‐art heuristics for solving the C‐SDVRP shows that our approach significantly improves the solution time, while keeping a comparable solution quality and improving some best‐known solutions. 
In addition, our approach is able to solve large instances with 100 customers and six commodities, and also provides very good quality lower bounds. Furthermore, an instance of the C‐SDVRP can be transformed into a CVRP instance by simply duplicating each customer as many times as the requested commodities and by assigning as demand the demand of the single commodity. Hence, we compare heuristics for the C‐SDVRP against the state‐of‐the‐art heuristic for the Capacitated Vehicle Routing Problem (CVRP). The latter approach revealed to have the best performance. However, our approach provides solutions of comparable quality and has the interest of providing a performance guarantee.","PeriodicalId":54734,"journal":{"name":"Networks","volume":"49 1","pages":""},"PeriodicalIF":2.1,"publicationDate":"2024-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141774649","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
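The customer‐duplication transformation described in the abstract is mechanical and can be sketched directly; the dict‐of‐dicts demand representation (customer → commodity → quantity) is an assumption for illustration:

```python
def sdvrp_to_cvrp(demands):
    """C-SDVRP -> CVRP transformation sketched per the abstract: duplicate
    each customer once per requested commodity, each copy carrying that
    single commodity's demand. Returns a list of (copy_id, demand) pairs,
    where copy_id = (customer, commodity)."""
    cvrp_customers = []
    for customer, per_commodity in demands.items():
        for commodity, qty in per_commodity.items():
            if qty > 0:   # skip commodities the customer does not request
                cvrp_customers.append(((customer, commodity), qty))
    return cvrp_customers
```

All copies of a customer share the same physical location in the resulting CVRP instance, which is why a CVRP solver can then serve each commodity with a single vehicle, as the C‐SDVRP requires.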
In this article, we develop novel mathematical models to optimize the utilization of community energy storage (CES) by clustering prosumers and consumers into energy sharing communities/microgrids in the context of a smart city. Three different microgrid configurations are modeled using a unifying mixed‐integer linear programming formulation. These configurations represent three different business models, namely: the island model, the interconnected model, and the Energy Service Companies model. The proposed mathematical formulations determine the optimal aggregation of households as well as the location and sizing of CES. To overcome the computational challenges of treating operational decisions within a multi‐period decision making framework, we also propose a decomposition approach to reduce the computational time needed to solve larger instances. We conduct a case study based on real power consumption, power generation, and location network data from Cambridge, MA. Our mathematical models and the underlying algorithmic framework can be used in operational and strategic planning studies on smart grids to incentivize communitarian distributed renewable energy generation and to improve the self‐consumption and self‐sufficiency of the energy sharing community. The models are also targeted at policymakers of smart cities, utility companies, and Energy Service Companies, as the proposed models support decision making on investments in renewable energy projects.
{"title":"Three network design problems for community energy storage","authors":"Bissan Ghaddar, Ivana Ljubić, Yuying Qiu","doi":"10.1002/net.22242","DOIUrl":"https://doi.org/10.1002/net.22242","url":null,"abstract":"In this article, we develop novel mathematical models to optimize utilization of community energy storage (CES) by clustering prosumers and consumers into energy sharing communities/microgrids in the context of a smart city. Three different microgrid configurations are modeled using a unifying mixed‐integer linear programming formulation. These configurations represent three different business models, namely: the island model, the interconnected model, and the Energy Service Companies model. The proposed mathematical formulations determine the optimal households' aggregation as well as the location and sizing of CES. To overcome the computational challenges of treating operational decisions within a multi‐period decision making framework, we also propose a decomposition approach to accelerate the computational time needed to solve larger instances. We conduct a case study based on real power consumption, power generation, and location network data from Cambridge, MA. Our mathematical models and the underlying algorithmic framework can be used in operational and strategic planning studies on smart grids to incentivize the communitarian distributed renewable energy generation and to improve the self‐consumption and self‐sufficiency of the energy sharing community. 
The models are also targeted to policymakers of smart cities, utility companies, and Energy Service Companies as the proposed models support decision making on renewable energy related projects investments.","PeriodicalId":54734,"journal":{"name":"Networks","volume":"24 1","pages":""},"PeriodicalIF":2.1,"publicationDate":"2024-07-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141740115","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
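The self‐consumption and self‐sufficiency notions mentioned above have standard definitions computable from load and generation profiles. The sketch below uses those textbook definitions over discrete intervals and ignores storage, so it may differ from the exact metrics in the article:

```python
def community_energy_metrics(load, generation):
    """Per-interval energy consumed on-site is min(load, generation),
    assuming no storage. Then:
      self-consumption = on-site consumed energy / total generation
      self-sufficiency = on-site consumed energy / total load."""
    self_consumed = sum(min(l, g) for l, g in zip(load, generation))
    total_gen = sum(generation)
    total_load = sum(load)
    return (self_consumed / total_gen if total_gen else 0.0,
            self_consumed / total_load if total_load else 0.0)
```

Pooling the profiles of several households before applying these formulas typically raises both metrics, which is the intuition behind clustering households into sharing communities.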
We present a reinforcement learning‐based heuristic for a two‐player interdiction game called the dynamic shortest path interdiction problem (DSPI). The DSPI involves an evader and an interdictor who take turns, with the interdictor selecting a set of arcs to attack and the evader choosing an arc to traverse at each step of the game. Our model employs the Monte Carlo tree search framework to learn a policy for the players using randomized roll‐outs. This policy is stored as an asymmetric game tree and can be further refined as the game unfolds. We leverage alpha–beta pruning and existing bounding schemes in the literature to prune suboptimal branches. Our numerical experiments demonstrate that the proposed approach yields near‐optimal solutions in many cases and allows for flexibility in balancing solution quality and computational effort.
{"title":"Monte Carlo tree search for dynamic shortest‐path interdiction","authors":"Alexey A. Bochkarev, J. Cole Smith","doi":"10.1002/net.22243","DOIUrl":"https://doi.org/10.1002/net.22243","url":null,"abstract":"We present a reinforcement learning‐based heuristic for a two‐player interdiction game called the dynamic shortest path interdiction problem (DSPI). The DSPI involves an evader and an interdictor who take turns in the problem, with the interdictor selecting a set of arcs to attack and the evader choosing an arc to traverse at each step of the game. Our model employs the Monte Carlo tree search framework to learn a policy for the players using randomized roll‐outs. This policy is stored as an asymmetric game tree and can be further refined as the game unfolds. We leverage alpha–beta pruning and existing bounding schemes in the literature to prune suboptimal branches. Our numerical experiments demonstrate that the prescribed approach yields near‐optimal solutions in many cases and allows for flexibility in balancing solution quality and computational effort.","PeriodicalId":54734,"journal":{"name":"Networks","volume":"1 1","pages":""},"PeriodicalIF":2.1,"publicationDate":"2024-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141585745","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This article addresses the linear optimization problem of maximizing the total costs that can be shared among a group of agents, while maintaining stability in the sense of the core constraints of a cooperative transferable utility game, or TU game. When maximizing total shareable costs, the cost shares must satisfy all constraints that define the core of a TU game, except for being budget balanced. The article first gives a fairly complete picture of the computational complexity of this optimization problem, its relation to optimization over the core itself, and its equivalence to other, minimal core relaxations that have been proposed earlier. We then address minimum cost spanning tree (MST) games as an example of a class of cost sharing games with a non‐empty core. While submodular cost functions yield efficient algorithms to maximize shareable costs, MST games have cost functions that are subadditive, but generally not submodular. Nevertheless, it is well known that cost shares in the core of MST games can be found efficiently. In contrast, we show that the maximization of shareable costs is ‐hard for MST games and derive a 2‐approximation algorithm. Our work opens several directions for future research.
{"title":"Algorithmic solutions for maximizing shareable costs","authors":"Rong Zou, Boyue Lin, Marc Uetz, Matthias Walter","doi":"10.1002/net.22240","DOIUrl":"https://doi.org/10.1002/net.22240","url":null,"abstract":"This article addresses the linear optimization problem to maximize the total costs that can be shared among a group of agents, while maintaining stability in the sense of the core constraints of a cooperative transferable utility game, or TU game. When maximizing total shareable costs, the cost shares must satisfy all constraints that define the core of a TU game, except for being budget balanced. The article first gives a fairly complete picture of the computational complexity of this optimization problem, its relation to optimization over the core itself, and its equivalence to other, minimal core relaxations that have been proposed earlier. We then address minimum cost spanning tree (MST) games as an example for a class of cost sharing games with non‐empty core. While submodular cost functions yield efficient algorithms to maximize shareable costs, MST games have cost functions that are subadditive, but generally not submodular. Nevertheless, it is well known that cost shares in the core of MST games can be found efficiently. In contrast, we show that the maximization of shareable costs is ‐hard for MST games and derive a 2‐approximation algorithm. 
Our work opens several directions for future research.","PeriodicalId":54734,"journal":{"name":"Networks","volume":"8 1","pages":""},"PeriodicalIF":2.1,"publicationDate":"2024-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141503749","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
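The core constraints minus budget balance can be checked by direct enumeration for small games: every coalition's total share must not exceed its stand‐alone cost. The sketch below does exactly that (it is exponential in the number of players, so purely illustrative); the cost function is supplied as a callable on frozensets, an assumption for illustration:

```python
from itertools import combinations

def is_stable(cost_shares, cost):
    """Check the core constraints other than budget balance: for every
    non-empty coalition S, the sum of shares over S must be at most the
    stand-alone cost c(S). `cost_shares` maps player -> share."""
    players = sorted(cost_shares)
    for r in range(1, len(players) + 1):
        for S in combinations(players, r):
            if sum(cost_shares[p] for p in S) > cost(frozenset(S)) + 1e-9:
                return False
    return True
```

Maximizing total shareable cost then amounts to maximizing the sum of shares subject to these inequalities; for a subadditive but non‐submodular cost function (as in MST games), that linear program has exponentially many constraints.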
Maite Dewinter, Caroline Jagtenberg, Christophe Vandeviver, Philipp M. Dau, Tom Vander Beken, Frank Witlox
Police forces around the world are adapting to optimize their current practices through intelligence‐led and evidence‐based policing. This trend towards increasingly data‐driven policing also affects daily police routines. Police patrol is a complex routing problem because of the combination of reactive and proactive tasks. Moreover, a trade‐off exists between these two patrol tasks. In this article, a police patrol algorithm is developed that combines both policing strategies into one and is applicable to everyday policing. To this end, a discrete event simulation model is built that compares a p‐median redeployment strategy with several benchmark strategies, that is, p‐median deployment, hotspot (re)deployment, and random redeployment. This p‐median redeployment strategy considers the continuous alternation of idle and non‐idle vehicles. The mean response time was lowest for the p‐median deployment strategy, but the redeployment strategy resulted in better coverage of the area and low mean response times.
{"title":"Reducing police response times: Optimization and simulation of everyday police patrol","authors":"Maite Dewinter, Caroline Jagtenberg, Christophe Vandeviver, Philipp M. Dau, Tom Vander Beken, Frank Witlox","doi":"10.1002/net.22241","DOIUrl":"https://doi.org/10.1002/net.22241","url":null,"abstract":"Police forces around the world are adapting to optimize their current practices through intelligence‐led and evidence‐based policing. This trend towards increasingly data‐driven policing also affects daily police routines. Police patrol is a complex routing problem because of the combination of reactive and proactive tasks. Moreover, a trade‐off exists between these two patrol tasks. In this article, a police patrol algorithm that combines both policing strategies into one strategy and is applicable to everyday policing, is developed. To this end, a discrete event simulation model is built that compares a p‐median redeployment strategy with several benchmark strategies, that is, p‐median deployment, hotspot (re)deployment, and random redeployment. This p‐median redeployment strategy considers the continuous alternation of idle and non‐idle vehicles. The mean response time was lowest for the p‐median deployment strategy, but the redeployment strategy results in better coverage of the area and low mean response times.","PeriodicalId":54734,"journal":{"name":"Networks","volume":"95 1","pages":""},"PeriodicalIF":2.1,"publicationDate":"2024-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141503745","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Optimizing order picking operations is indispensable for warehouses that promise a high customer service level. While many areas for improvement have been identified and studied in the literature, a large gap remains between academia and practice. To help close this gap, we perform a case study in collaboration with a spare‐parts warehouse in Belgium. In this study, we optimize the order picking operations of the company, using the actual warehouse layout and real order data. A state‐of‐the‐art online integrated order batching, picker routing, and batch scheduling algorithm is adapted to consider multiple real‐life constraints. More specifically, the dynamic arrival of new orders is considered, and a capacity constraint on the sorting installation must be respected. Furthermore, a new waiting strategy is studied in which order pickers can temporarily postpone certain orders, as combining them with possible future order arrivals may allow for more efficient overall picking performance. Finally, the performance of the current operating policy is compared with that of both a seed batching heuristic and our metaheuristic algorithm by means of an ANOVA analysis. The results indicate that the number of order pickers can be reduced by 12.5% if the new optimization algorithm is used, accompanied by an improvement in the offered customer service level.
A real‐life study on the value of integrated optimization in order picking operations under dynamic order arrivals. Ruben D'Haen, Katrien Ramaekers, Stef Moons, Kris Braekers. Networks, published 2024‐06‐20. DOI: 10.1002/net.22237.
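The seed batching heuristic used as a baseline above can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the `Order` fields, the aisle-overlap similarity measure, and the pick-line capacity model are all assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class Order:
    order_id: int
    items: int          # number of pick lines in the order
    aisles: set[int]    # aisles the order's items occupy

def seed_batching(orders: list[Order], capacity: int) -> list[list[Order]]:
    """Greedy seed batching: open each batch with the order spanning the
    most aisles (the seed), then repeatedly add the remaining order with
    the largest aisle overlap until the cart capacity (in pick lines)
    would be exceeded."""
    pending = sorted(orders, key=lambda o: len(o.aisles), reverse=True)
    batches: list[list[Order]] = []
    while pending:
        seed = pending.pop(0)
        batch, load, covered = [seed], seed.items, set(seed.aisles)
        while True:
            candidates = [o for o in pending if load + o.items <= capacity]
            if not candidates:
                break
            best = max(candidates, key=lambda o: len(o.aisles & covered))
            pending.remove(best)
            batch.append(best)
            load += best.items
            covered |= best.aisles
        batches.append(batch)
    return batches
```

The overlap-based addition rule stands in for whatever seed/addition rules the company baseline uses; any similarity measure (e.g., savings in travel distance) could be substituted.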
Tasnim Ibn Faiz, Chrysafis Vogiatzis, Jiongbai Liu, Md. Noor‐E‐Alam
Providing first aid and other supplies (e.g., epi‐pens, medical supplies, dry food, water) during and after a disaster is always challenging. The complexity of these operations increases when the transportation, power, and communications networks fail, leaving people stranded and unable to communicate their locations and needs. Emerging technologies such as uncrewed autonomous vehicles can help humanitarian logistics providers reach otherwise stranded populations after transportation network failures. However, due to failures in telecommunication infrastructure, demand for emergency aid can become uncertain. To address the challenges of delivering emergency aid to trapped populations with failing infrastructure networks, we propose a novel robust computational framework for a two‐echelon vehicle routing problem that uses uncrewed autonomous vehicles (UAVs), or drones, for the deliveries. We formulate the problem as a two‐stage robust optimization model to handle demand uncertainty. Then, we propose a column‐and‐constraint generation approach for worst‐case demand scenario generation for a given set of truck and UAV routes. Moreover, we develop a decomposition scheme inspired by the column generation approach to heuristically generate UAV routes for a set of demand scenarios. Finally, we combine the decomposition scheme with the column‐and‐constraint generation approach to determine robust routes for both trucks (first‐echelon vehicles) and UAVs (second‐echelon vehicles), the times at which affected communities are served, and the quantities of aid materials delivered. To validate our proposed algorithms, we use a simulated dataset that aims to recreate emergency aid requests in different areas of Puerto Rico after Hurricane Maria in 2017.
A robust optimization framework for two‐echelon vehicle and UAV routing for post‐disaster humanitarian logistics operations. Tasnim Ibn Faiz, Chrysafis Vogiatzis, Jiongbai Liu, Md. Noor‐E‐Alam. Networks, published 2024‐05‐31. DOI: 10.1002/net.22233.
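The column‐and‐constraint generation scheme described above alternates between a master problem (here, first‐stage decisions against the scenarios generated so far) and an adversarial subproblem that searches for a violated worst‐case demand scenario. The loop can be illustrated on a toy single‐item robust stocking problem; the instance, the unit/shortage cost structure, and the enumeration‐based "solvers" are illustrative assumptions standing in for the paper's truck/UAV routing formulations.

```python
def solve_master(scenarios, x_max=10, c=1, p=3):
    """Master problem: choose a stock level x minimizing unit cost plus
    shortage penalty against the scenarios generated so far.  The toy
    instance is small enough to solve by enumeration."""
    def cost(x):
        shortage = max((max(d - x, 0) for d in scenarios), default=0)
        return c * x + p * shortage
    x = min(range(x_max + 1), key=cost)
    return x, cost(x)

def worst_scenario(x, uncertainty_set, p=3):
    """Subproblem: the adversary picks the demand maximizing recourse cost
    for the current first-stage decision x."""
    d = max(uncertainty_set, key=lambda d: max(d - x, 0))
    return d, p * max(d - x, 0)

def ccg(uncertainty_set, tol=1e-9):
    """Column-and-constraint generation loop: add the violating scenario
    to the master until lower and upper bounds meet."""
    scenarios = [min(uncertainty_set)]   # start from one nominal scenario
    while True:
        x, lb = solve_master(scenarios)          # optimistic bound
        d, recourse = worst_scenario(x, uncertainty_set)
        ub = x + recourse                        # true worst-case cost of x
        if ub - lb <= tol:                       # no violated scenario left
            return x, ub
        scenarios.append(d)                      # add scenario, re-solve
```

In the paper's setting, `solve_master` would be the route-selection MIP and `worst_scenario` the adversarial demand problem over the uncertainty set; the convergence logic of the loop is the same.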