P. He, Jin-Kao Hao, Qinghua Wu. "Hybrid genetic algorithm for undirected traveling salesman problems with profits." Networks 82(1), 189-221 (2023). doi:10.1002/net.22167

The orienteering problem (OP) and the prize-collecting traveling salesman problem (PCTSP) are two typical TSPs with profits, in which each vertex carries a profit and the goal is to visit a subset of the vertices so as to optimize the collected profit and the travel cost. The OP aims to collect the maximum profit without exceeding a given travel-cost budget, while the PCTSP seeks to minimize the travel cost while ensuring a minimum profit threshold. This study introduces a hybrid genetic algorithm that addresses both the OP and the PCTSP under a unified framework. The algorithm combines an extended edge-assembly crossover operator, which produces promising offspring solutions, with an effective local search that improves each offspring. It is further reinforced by a diversification-oriented mutation and population-diversity management. Extensive experiments demonstrate that the method competes favorably with the best existing methods in terms of both solution quality and computational efficiency, and additional experiments provide insights into the roles of the key components of the proposed method.
O. Cruz-Mejía, A. Letchford. "A survey on exact algorithms for the maximum flow and minimum-cost flow problems." Networks 82(1), 167-176 (2023). doi:10.1002/net.22169

Network flow problems form an important and much-studied family of combinatorial optimization problems, with a huge array of practical applications. Two network flow problems in particular have received a great deal of attention: the maximum flow and minimum-cost flow problems. We review the progress that has been made on exact solution algorithms for these two problems, with an emphasis on worst-case running times.
Nili Guttmann-Beck, Hadas Meshita‐Sayag, Michal Stern. "Achieving feasibility for clustered traveling salesman problems using PQ-trees." Networks 82(1), 153-166 (2023). doi:10.1002/net.22164

Let $H=\langle V,\mathcal{S}\rangle$ be a hypergraph, where $V$ is a set of vertices and $\mathcal{S}$ is a set of clusters $S_1,\dots,S_m$, $S_i\subseteq V$, such that the clusters in $\mathcal{S}$ are not necessarily disjoint. This article considers the feasibility clustered traveling salesman problem, denoted by FCTSP. In the FCTSP we aim to decide whether a simple path exists that visits each vertex exactly once, such that the vertices of each cluster are visited consecutively. We focus on hypergraphs with no feasible solution path and consider removing vertices from clusters, such that the hypergraph with the new clusters has a feasible solution path for the FCTSP. The algorithm uses a PQ-tree data structure and runs in linear time.
Elisabeth Gaar, Markus Sinnl. "Exact solution approaches for the discrete α-neighbor p-center problem." Networks (2023). doi:10.1002/net.22162

The discrete α-neighbor p-center problem (d-α-p-CP) is an emerging variant of the classical p-center problem which has recently received attention in the literature. In this problem, we are given a discrete set of points and we need to locate p facilities on these points in such a way that the maximum distance between each point where no facility is located and its α-closest facility is minimized. The only existing algorithms in the literature for solving the d-α-p-CP are approximation algorithms and two recently proposed heuristics. In this work, we present two integer programming formulations for the d-α-p-CP, together with lifting of inequalities, valid inequalities, inequalities that do not change the optimal objective function value, and variable fixing procedures. We provide theoretical results on the strength of the formulations and convergence results for the lower bounds obtained after applying the lifting procedures or the variable fixing procedures in an iterative fashion. Based on our formulations and theoretical results, we develop branch-and-cut (B&C) algorithms, which are further enhanced with a starting heuristic and a primal heuristic. We evaluate the effectiveness of our B&C algorithms using instances from the literature. Our algorithms are able to solve 116 out of 194 instances from the literature to proven optimality, with a runtime of under a minute for most of them. By doing so, we also provide improved solution values for 116 instances.
Michele Barbato, Francisco Canas, Luís Gouveia, Pierre Pesneau. "Node based compact formulations for the Hamiltonian p-median problem." Networks (2023). doi:10.1002/net.22163

In this paper, we introduce, study, and analyze several classes of compact formulations for the symmetric Hamiltonian p-median problem (HpMP). Given a positive integer p and a complete undirected graph G with weights on the edges, the HpMP on G is to find a minimum weight set of p elementary cycles partitioning the vertices of G. The advantage of developing compact formulations is that they can be readily used in combination with off-the-shelf optimization software, unlike other types of formulations possibly involving exponentially sized sets of variables or constraints. The main part of the paper focuses on compact formulations for eliminating solutions with fewer than p cycles. Such formulations are less well known and studied than formulations which prevent solutions with more than p cycles. The proposed formulations are based on a common motivation: they contain variables that assign labels to nodes, and they prevent fewer than p cycles by stating that different depots must have different labels and that nodes in the same cycle must have the same label. We introduce and study aggregated formulations (which use integer variables representing the label of each node) and disaggregated formulations (which use binary variables assigning each node to a given label). The aggregated models are new. The disaggregated formulations are not, although new enhancements have been included in all of them to make them more competitive with the aggregated models. The two main conclusions of this study are: (i) in the context of compact formulations, it is worth looking at the models with integer node variables, which have a smaller size; despite their weaker LP relaxation bounds, the fewer variables and constraints lead to faster integer resolution, especially when solving instances with more than 50 nodes; (ii) the best of our compact models exhibit a performance that, overall, is comparable to that of the best methods known for the HpMP (including branch-and-cut algorithms), solving to optimality instances with up to 226 nodes within 1 h. This corroborates our message that inequalities for preventing fewer than p cycles are much less well understood.
Jason I. Brown, Isaac McMullin. "On the split reliability of graphs." Networks 82(1), 177-185 (2023). doi:10.1002/net.22166

A common model of robustness of a graph against random failures has all vertices operational, but the edges independently operational with probability $p$. One can ask for the probability that all vertices can communicate (all-terminal reliability) or that two specific vertices (or terminals) can communicate with each other (two-terminal reliability). A relatively new measure is split reliability, where for two fixed vertices $s$ and $t$, we consider the probability that every vertex communicates with one of $s$ or $t$, but not both. In this article, we explore the existence, for fixed numbers $n\ge 2$ and $m\ge n-1$, of an optimal connected $(n,m)$-graph $G_{n,m}$ for split reliability, that is, a connected graph with $n$ vertices and $m$ edges such that, for any other such graph $H$, the split reliability of $G_{n,m}$ is at least as large as that of $H$ for all values of $p\in[0,1]$. Unlike the similar problems for all-terminal and two-terminal reliability, where only partial results are known, we completely solve the issue for split reliability: we show that there is an optimal $(n,m)$-graph for split reliability if and only if $n\le 3$, $m=n-1$, or $n=m=4$.
Xabier A. Martin, Javier Panadero, David Peidro, E. Pérez-Bernabeu, A. Juan. "Solving the time capacitated arc routing problem under fuzzy and stochastic travel and service times." Networks (2023). doi:10.1002/net.22159

Stochastic as well as fuzzy uncertainty can be found in most real-world systems, and considering both types of uncertainty simultaneously makes optimization problems especially challenging. In this paper we propose a fuzzy simheuristic to solve the Time Capacitated Arc Routing Problem (TCARP) when the travel times can be deterministic, stochastic, or fuzzy. The main goal is to find a solution (a set of vehicle routes) that minimizes the total time spent in servicing the required arcs. However, due to uncertainty, other characteristics of the solution are also considered; in particular, we illustrate how reliability concepts can enrich the probabilistic information given to decision-makers. In order to solve this optimization problem, we extend the simheuristic framework so that it can also include fuzzy elements; hence, both stochastic and fuzzy uncertainty are simultaneously incorporated into the CARP. In order to test our approach, classical CARP instances have been adapted and extended so that customers' demands become either stochastic or fuzzy. The experimental results show the effectiveness of the proposed approach when compared with more traditional ones. In particular, our fuzzy simheuristic is capable of generating new best-known solutions for the stochastic versions of some instances belonging to the tegl, tcarp, val, and rural benchmarks.
Masoud Eshghali, P. Krokhmal. "Risk-averse optimization and resilient network flows." Networks 82(1), 129-152 (2023). doi:10.1002/net.22149

We propose an approach to constructing metrics of network resilience, where resilience is understood as the network's amenability to restoring its optimal or near-optimal operations after unforeseen (stochastic) disruptions of its topology or operational parameters, and we illustrate it on the resilient maximum network flow problem and the resilient minimum-cost network flow problem. Specifically, the network flows in these problems are designed for resilience against unpredictable losses of network carrying capacity, and the mechanism for attaining a degree of resilience is the preallocation of resources toward (at least partial) restoration of the capacities of the arcs. The obtained formulations of resilient network flow problems possess a number of useful properties; for example, as in the standard network flow problems, the network flow is integral if the arc capacities, costs, and so forth are integral. It is also shown that the proposed formulations can be viewed as "network measures of risk", similar in properties and behavior to convex measures of risk. Efficient decomposition algorithms are proposed for both the resilient maximum network flow problem and the resilient minimum-cost network flow problem, and a study of network flow resilience as a function of the network's structure is conducted on networks with three types of topology: uniform random graphs, scale-free graphs, and grid graphs.
Fábio Barbosa, A. D. de Sousa, A. Agra. "Provision of maximum connectivity resiliency with minimum cost to telecommunication networks through third-party networks." Networks 82(1), 87-110 (2023). doi:10.1002/net.22148

In telecommunication networks, full connectivity resilience to multiple link failures is too costly, as it requires a network topology with too many redundant links. Alternatively, the connectivity resilience of a telecommunications network can be improved by resorting to available third-party networks for temporary additional connectivity until the failing links are restored. In this approach, some nodes of the network must be selected in advance to act as gateway nodes to the third-party networks when a multiple link failure event occurs. For a given network topology and a cost associated with turning each node into a gateway node to each of the third-party networks, the aim is to select the gateway nodes providing maximum connectivity resilience at minimum cost. The Gateway Node Selection problem is defined as a bi-objective optimization problem whose Pareto-optimal solutions represent different trade-offs between cost and connectivity resilience. In this work, the connectivity resilience is modeled by the Critical Link Detection optimization problem. An exact optimization algorithm is proposed, based on a row generation algorithm and on set cover cuts. The computational results demonstrate the effectiveness of the proposed algorithm on four well-known telecommunication network topologies.