Efficient branch-and-bound algorithms for finding triangle-constrained 2-clubs
Pub Date: 2024-09-21 | DOI: 10.1007/s10878-024-01204-z
Niels Grüttemeier, Philipp Heinrich Keßler, Christian Komusiewicz, Frank Sommer
In the Vertex Triangle 2-Club problem, we are given an undirected graph G and aim to find a maximum-vertex subgraph of G that has diameter at most 2 and in which every vertex is contained in at least $\ell$ triangles in the subgraph. So far, the only algorithm for solving Vertex Triangle 2-Club relies on an ILP formulation (Almeida and Brás in Comput Oper Res 111:258–270, 2019). In this work, we develop a combinatorial branch-and-bound algorithm that, coupled with a set of data reduction rules, outperforms the existing implementation and is able to find optimal solutions on sparse real-world graphs with more than 100,000 vertices in a few minutes. We also extend our algorithm to the Edge Triangle 2-Club problem where the triangle constraint is imposed on all edges of the subgraph.
{"title":"Efficient branch-and-bound algorithms for finding triangle-constrained 2-clubs","authors":"Niels Grüttemeier, Philipp Heinrich Keßler, Christian Komusiewicz, Frank Sommer","doi":"10.1007/s10878-024-01204-z","DOIUrl":"https://doi.org/10.1007/s10878-024-01204-z","url":null,"abstract":"<p>In the <span>Vertex Triangle 2-Club</span> problem, we are given an undirected graph <i>G</i> and aim to find a maximum-vertex subgraph of <i>G</i> that has diameter at most 2 and in which every vertex is contained in at least <span>(ell )</span> triangles in the subgraph. So far, the only algorithm for solving <span>Vertex Triangle 2-Club</span> relies on an ILP formulation (Almeida and Brás in Comput Oper Res 111:258–270, 2019). In this work, we develop a combinatorial branch-and-bound algorithm that, coupled with a set of data reduction rules, outperforms the existing implementation and is able to find optimal solutions on sparse real-world graphs with more than 100,000 vertices in a few minutes. We also extend our algorithm to the <span>Edge Triangle 2-Club</span> problem where the triangle constraint is imposed on all edges of the subgraph.</p>","PeriodicalId":50231,"journal":{"name":"Journal of Combinatorial Optimization","volume":"70 2 1","pages":""},"PeriodicalIF":1.0,"publicationDate":"2024-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142276008","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Minmax regret 1-sink location problems on dynamic flow path networks with parametric weights
Pub Date: 2024-08-26 | DOI: 10.1007/s10878-024-01199-7
Tetsuya Fujie, Yuya Higashikawa, Naoki Katoh, Junichi Teruyama, Yuki Tokuni
This paper addresses the minmax regret 1-sink location problem on a dynamic flow path network with parametric weights. A dynamic flow path network consists of an undirected path with positive edge lengths, positive edge capacities, and nonnegative vertex weights. A path can be considered as a road, an edge length as the distance along the road, and a vertex weight as the number of people at the site. An edge capacity limits the number of people that can enter the edge per unit time. We consider the problem of locating a sink where all the people evacuate quickly. In our model, each weight is represented by a linear function of a common parameter t, and the decision maker who determines the sink location does not know the value of t. We formulate the problem under such uncertainty as the minmax regret problem. Given t and sink location x, the cost is the sum of arrival times at x for all the people determined by t. The regret for x under t is the gap between this cost and the optimal cost under t. The problem is to find the sink location minimizing the maximum regret over all t. For the problem, we propose an $O(n^4 2^{\alpha(n)} \alpha(n)^2 \log n)$ time algorithm, where n is the number of vertices in the network and $\alpha(\cdot)$ is the inverse Ackermann function. Also, for the special case in which every edge has the same capacity, we show that the complexity can be reduced to $O(n^3 2^{\alpha(n)} \alpha(n) \log n)$.
{"title":"Minmax regret 1-sink location problems on dynamic flow path networks with parametric weights","authors":"Tetsuya Fujie, Yuya Higashikawa, Naoki Katoh, Junichi Teruyama, Yuki Tokuni","doi":"10.1007/s10878-024-01199-7","DOIUrl":"https://doi.org/10.1007/s10878-024-01199-7","url":null,"abstract":"<p>This paper addresses the minmax regret 1-sink location problem on a dynamic flow path network with parametric weights. A <i>dynamic flow path network</i> consists of an undirected path with positive edge lengths, positive edge capacities, and nonnegative vertex weights. A path can be considered as a road, an edge length as the distance along the road, and a vertex weight as the number of people at the site. An edge capacity limits the number of people that can enter the edge per unit time. We consider the problem of locating a <i>sink</i> where all the people evacuate quickly. In our model, each weight is represented by a linear function of a common parameter <i>t</i>, and the decision maker who determines the sink location does not know the value of <i>t</i>. We formulate the problem under such uncertainty as the <i>minmax regret problem</i>. Given <i>t</i> and sink location <i>x</i>, the cost is the sum of arrival times at <i>x</i> for all the people determined by <i>t</i>. The regret for <i>x</i> under <i>t</i> is the gap between this cost and the optimal cost under <i>t</i>. The problem is to find the sink location minimizing the maximum regret over all <i>t</i>. For the problem, we propose an <span>(O(n^4 2^{alpha (n)} alpha (n)^2 log n))</span> time algorithm, where <i>n</i> is the number of vertices in the network and <span>(alpha (cdot ))</span> is the inverse Ackermann function. Also, for the special case in which every edge has the same capacity, we show that the complexity can be reduced to <span>(O(n^3 2^{alpha (n)} alpha (n) log n))</span>.</p>","PeriodicalId":50231,"journal":{"name":"Journal of Combinatorial Optimization","volume":"12 1","pages":""},"PeriodicalIF":1.0,"publicationDate":"2024-08-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142084848","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Efficient estimation of the modified Gromov–Hausdorff distance between unweighted graphs
Pub Date: 2024-08-23 | DOI: 10.1007/s10878-024-01202-1
Vladyslav Oles, Nathan Lemons, Alexander Panchenko
Gromov–Hausdorff distances measure shape difference between objects representable as compact metric spaces, e.g. point clouds, manifolds, or graphs. Computing any Gromov–Hausdorff distance is equivalent to solving an NP-hard optimization problem, rendering the notion impractical for applications. In this paper we propose a polynomial algorithm for estimating the so-called modified Gromov–Hausdorff (mGH) distance, a relaxation of the standard Gromov–Hausdorff (GH) distance with similar topological properties. We implement the algorithm for the case of compact metric spaces induced by unweighted graphs as part of the Python library scikit-tda, and demonstrate its performance on real-world and synthetic networks. The algorithm finds the mGH distances exactly on most graphs with the scale-free property. We use the computed mGH distances to successfully detect outliers in real-world social and computer networks.
{"title":"Efficient estimation of the modified Gromov–Hausdorff distance between unweighted graphs","authors":"Vladyslav Oles, Nathan Lemons, Alexander Panchenko","doi":"10.1007/s10878-024-01202-1","DOIUrl":"https://doi.org/10.1007/s10878-024-01202-1","url":null,"abstract":"<p>Gromov–Hausdorff distances measure shape difference between the objects representable as compact metric spaces, e.g. point clouds, manifolds, or graphs. Computing any Gromov–Hausdorff distance is equivalent to solving an NP-hard optimization problem, deeming the notion impractical for applications. In this paper we propose a polynomial algorithm for estimating the so-called modified Gromov–Hausdorff (mGH) distance, a relaxation of the standard Gromov–Hausdorff (GH) distance with similar topological properties. We implement the algorithm for the case of compact metric spaces induced by unweighted graphs as part of Python library <span>scikit-tda</span>, and demonstrate its performance on real-world and synthetic networks. The algorithm finds the mGH distances exactly on most graphs with the scale-free property. We use the computed mGH distances to successfully detect outliers in real-world social and computer networks.</p>","PeriodicalId":50231,"journal":{"name":"Journal of Combinatorial Optimization","volume":"50 1","pages":""},"PeriodicalIF":1.0,"publicationDate":"2024-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142045389","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Meta-heuristic-based hybrid deep learning model for vulnerability detection and prevention in software system
Pub Date: 2024-08-20 | DOI: 10.1007/s10878-024-01185-z
Lijin Shaji, R. Suji Pramila
Software vulnerabilities are flaws that may be exploited to cause loss or harm. Various automated machine-learning techniques have been developed in preceding studies to detect software vulnerabilities. This work develops a technique for securing software on the basis of its already known vulnerabilities, by building a hybrid deep learning model to detect those vulnerabilities. Moreover, certain countermeasures are suggested based on the type of vulnerability to prevent further attacks. For different software projects taken as the dataset, feature fusion is performed using canonical correlation analysis together with a Deep Residual Network (DRN). A hybrid deep learning technique trained using the AdamW-Rat Swarm Optimizer (AdamW-RSO) is designed to detect software vulnerabilities. The hybrid model combines a Deep Belief Network (DBN) and a Generative Adversarial Network (GAN). For every vulnerability, its location of occurrence within the software development process and techniques for mitigation through implementation-level or design-level activities are described. This helps in understanding how vulnerabilities arise, suggests the use of various countermeasures during the initial phases of software design, and therefore assures software security. Evaluating the performance of vulnerability detection by the proposed technique in terms of recall, precision, and f-measure, it is found to be more effective than existing methods.
{"title":"Meta-heuristic-based hybrid deep learning model for vulnerability detection and prevention in software system","authors":"Lijin Shaji, R. Suji Pramila","doi":"10.1007/s10878-024-01185-z","DOIUrl":"https://doi.org/10.1007/s10878-024-01185-z","url":null,"abstract":"<p>Software vulnerabilities are flaws that may be exploited to cause loss or harm. Various automated machine-learning techniques have been developed in preceding studies to detect software vulnerabilities. This work tries to develop a technique for securing the software on the basis of their vulnerabilities that are already known, by developing a hybrid deep learning model to detect those vulnerabilities. Moreover, certain countermeasures are suggested based on the types of vulnerability to prevent the attack further. For different software projects taken as the dataset, feature fusion is done by utilizing canonical correlation analysis together with Deep Residual Network (DRN). A hybrid deep learning technique trained using AdamW-Rat Swarm Optimizer (AdamW-RSO) is designed to detect software vulnerability. Hybrid deep learning makes use of the Deep Belief Network (DBN) and Generative Adversarial Network (GAN). For every vulnerability, its location of occurrence within the software development procedures and techniques of alleviation via implementation level or design level activities are described. Thus, it helps in understanding the appearance of vulnerabilities, suggesting the use of various countermeasures during the initial phases of software design, and therefore, assures software security. Evaluating the performance of vulnerability detection by the proposed technique regarding recall, precision, and f-measure, it is found to be more effective than the existing methods.</p>","PeriodicalId":50231,"journal":{"name":"Journal of Combinatorial Optimization","volume":"10 1","pages":""},"PeriodicalIF":1.0,"publicationDate":"2024-08-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142013797","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The prize-collecting single machine scheduling with bounds and penalties
Pub Date: 2024-08-16 | DOI: 10.1007/s10878-024-01203-0
Guojun Hu, Pengxiang Pan, Suding Liu, Ping Yang, Runtao Xie
This study investigates the prize-collecting single machine scheduling with bounds and penalties (PC-SMS-BP). In this problem, a set of $n$ jobs and a single machine are considered, where each job $J_j$ has a processing time $p_j$, a profit $\pi_j$ and a rejection penalty $w_j$. The upper bound on the number of accepted jobs is $U$. The objective is to find a feasible schedule that minimizes the makespan of the accepted jobs and the total rejection penalty of the rejected jobs, under the condition that the number of accepted jobs does not exceed the given threshold $U$ while the total profit of the accepted jobs does not fall below a specified profit bound $\varPi$. We first demonstrate that this problem is NP-hard. Then, a pseudo-polynomial time dynamic programming algorithm and a fully polynomial time approximation scheme (FPTAS) are proposed. Finally, numerical experiments are conducted to compare the effectiveness of the two proposed algorithms.
{"title":"The prize-collecting single machine scheduling with bounds and penalties","authors":"Guojun Hu, Pengxiang Pan, Suding Liu, Ping Yang, Runtao Xie","doi":"10.1007/s10878-024-01203-0","DOIUrl":"https://doi.org/10.1007/s10878-024-01203-0","url":null,"abstract":"<p>This study investigates the prize-collecting single machine scheduling with bounds and penalties (PC-SMS-BP). In this problem, a set of <i>n</i> jobs and a single machine are considered, where each job <span>(J_j)</span> has a processing time <span>(p_{j})</span>, a profit <span>(pi _{j})</span> and a rejection penalty <span>(w_{j})</span>. The upper bound on the processing number is <i>U</i>. The objective of this study is to find a feasible schedule that minimizes the makespan of the accepted jobs and the total rejection penalty of the rejected jobs under the condition that the number of the accepted jobs does not exceed a given threshold <i>U</i> while the total profit of the accepted jobs does not fall below a specified profit bound <span>(varPi )</span>. We first demonstrate that this problem is <i>NP</i>-hard. Then, a pseudo-polynomial time dynamic programming algorithm and a fully polynomial time approximation scheme (FPTAS) are proposed. Finally, numerical experiments are conducted to compare the effectiveness of the two proposed algorithms.\u0000</p>","PeriodicalId":50231,"journal":{"name":"Journal of Combinatorial Optimization","volume":"25 1","pages":""},"PeriodicalIF":1.0,"publicationDate":"2024-08-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141994521","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On the complexity of minimum maximal acyclic matchings
Pub Date: 2024-08-07 | DOI: 10.1007/s10878-024-01200-3
Juhi Chaudhary, Sounaka Mishra, B. S. Panda
Low-Acy-Matching asks to find a minimum-cardinality maximal matching M in a given graph G such that the set of M-saturated vertices induces an acyclic subgraph in G. The decision version of Low-Acy-Matching is known to be $\textsf{NP}$-complete. In this paper, we strengthen this result by proving that the decision version of Low-Acy-Matching remains $\textsf{NP}$-complete for bipartite graphs with maximum degree 6 and planar perfect elimination bipartite graphs. We also show the hardness difference between Low-Acy-Matching and Max-Acy-Matching. Furthermore, we prove that, even for bipartite graphs, Low-Acy-Matching cannot be approximated within a ratio of $n^{1-\epsilon}$ for any $\epsilon > 0$ unless $\textsf{P} = \textsf{NP}$. Finally, we establish that Low-Acy-Matching exhibits $\textsf{APX}$-hardness when restricted to 4-regular graphs.
{"title":"On the complexity of minimum maximal acyclic matchings","authors":"Juhi Chaudhary, Sounaka Mishra, B. S. Panda","doi":"10.1007/s10878-024-01200-3","DOIUrl":"https://doi.org/10.1007/s10878-024-01200-3","url":null,"abstract":"<p><span>Low-Acy-Matching</span> asks to find a maximal matching <i>M</i> in a given graph <i>G</i> of minimum cardinality such that the set of <i>M</i>-saturated vertices induces an acyclic subgraph in <i>G</i>. The decision version of <span>Low-Acy-Matching</span> is known to be <span>({textsf{NP}})</span>-complete. In this paper, we strengthen this result by proving that the decision version of <span>Low-Acy-Matching</span> remains <span>({textsf{NP}})</span>-complete for bipartite graphs with maximum degree 6 and planar perfect elimination bipartite graphs. We also show the hardness difference between <span>Low-Acy-Matching</span> and <span>Max-Acy-Matching</span>. Furthermore, we prove that, even for bipartite graphs, <span>Low-Acy-Matching</span> cannot be approximated within a ratio of <span>(n^{1-epsilon })</span> for any <span>(epsilon >0)</span> unless <span>({textsf{P}}={textsf{NP}})</span>. Finally, we establish that <span>Low-Acy-Matching</span> exhibits <span>(textsf{APX})</span>-hardness when restricted to 4-regular graphs.\u0000</p>","PeriodicalId":50231,"journal":{"name":"Journal of Combinatorial Optimization","volume":"190 1","pages":""},"PeriodicalIF":1.0,"publicationDate":"2024-08-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141899467","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Polynomial algorithms for sparse spanners on subcubic graphs
Pub Date: 2024-08-07 | DOI: 10.1007/s10878-024-01197-9
R. Gómez, F. K. Miyazawa, Y. Wakabayashi
Let G be a connected graph and $t \ge 1$ a (rational) constant. A t-spanner of G is a spanning subgraph of G in which the distance between any pair of vertices is at most t times its distance in G. We address two problems on spanners. The first one, known as the minimum t-spanner problem (MinS$_t$), seeks in a connected graph a t-spanner with the smallest possible number of edges. In the second one, called the minimum cost tree t-spanner problem (MCTS$_t$), the input graph has costs assigned to its edges and seeks a t-spanner that is a tree with minimum cost. It is an optimization version of the tree t-spanner problem (TreeS$_t$), a decision problem concerning the existence of a t-spanner that is a tree. MinS$_t$ is known to be $\textsc{NP}$-hard for every $t \ge 2$. On the other hand, TreeS$_t$ admits a polynomial-time algorithm for $t \le 2$ and is $\textsc{NP}$-complete for $t \ge 4$; but its complexity for $t = 3$ remains open. We focus on the class of subcubic graphs. First, we show that for such graphs MinS$_3$ can be solved in polynomial time. These results yield a practical polynomial algorithm for TreeS$_3$ that is of a combinatorial nature. We also show that MCTS$_2$ can be solved in polynomial time. To obtain this last result, we prove a complete linear characterization of the polytope defined by the incidence vectors of the tree 2-spanners of a subcubic graph. A recent result showing that MinS$_3$ on graphs with maximum degree at most 5 is NP-hard, together with the current result on subcubic graphs, leaves open only the complexity of MinS$_3$ on graphs with maximum degree 4.
{"title":"Polynomial algorithms for sparse spanners on subcubic graphs","authors":"R. Gómez, F. K. Miyazawa, Y. Wakababayashi","doi":"10.1007/s10878-024-01197-9","DOIUrl":"https://doi.org/10.1007/s10878-024-01197-9","url":null,"abstract":"<p>Let <i>G</i> be a connected graph and <span>(t ge 1)</span> a (rational) constant. A <i>t</i>-<i>spanner</i> of <i>G</i> is a spanning subgraph of <i>G</i> in which the distance between any pair of vertices is at most <i>t</i> times its distance in <i>G</i>. We address two problems on spanners. The first one, known as the <i>minimum</i> <i>t</i>-<i>spanner problem</i> (<span>MinS</span> <span>(_t)</span>), seeks in a connected graph a <i>t</i>-spanner with the smallest possible number of edges. In the second one, called <i>minimum cost tree</i> <i>t</i>-<i>spanner problem</i> (<span>MCTS</span> <span>(_t)</span>), the input graph has costs assigned to its edges and seeks a <i>t</i>-spanner that is a tree with minimum cost. It is an optimization version of the <i>tree</i> <i>t</i>-<i>spanner problem</i> (<span>TreeS</span> <span>(_t)</span>), a decision problem concerning the existence of a <i>t</i>-spanner that is a tree. <span>MinS</span> <span>(_t)</span> is known to be <span>({textsc {NP}})</span>-hard for every <span>(t ge 2)</span>. On the other hand, <span>TreeS</span> <span>(_t)</span> admits a polynomial-time algorithm for <span>(t le 2)</span> and is <span>({textsc {NP}})</span>-complete for <span>(t ge 4)</span>; but its complexity for <span>(t=3)</span> remains open. We focus on the class of subcubic graphs. First, we show that for such graphs <span>MinS</span> <span>(_3)</span> can be solved in polynomial time. These results yield a practical polynomial algorithm for <span>TreeS</span> <span>(_3)</span> that is of a combinatorial nature. We also show that <span>MCTS</span> <span>(_2)</span> can be solved in polynomial time. To obtain this last result, we prove a complete linear characterization of the polytope defined by the incidence vectors of the tree 2-spanners of a subcubic graph. A recent result showing that <span>MinS</span> <span>(_3)</span> on graphs with maximum degree at most 5 is NP-hard, together with the current result on subcubic graphs, leaves open only the complexity of <span>MinS</span> <span>(_3)</span> on graphs with maximum degree 4.\u0000</p>","PeriodicalId":50231,"journal":{"name":"Journal of Combinatorial Optimization","volume":"299 1","pages":""},"PeriodicalIF":1.0,"publicationDate":"2024-08-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141899478","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Approximating the probabilistic p-Center problem under pressure
Pub Date: 2024-08-07 | DOI: 10.1007/s10878-024-01194-y
Marc Demange, Marcel A. Haddad, Cécile Murat
The Probabilistic p-Center problem under Pressure (Min PpCP) is a variant, which we recently introduced in the context of wildfire management, of the usual Min p-Center problem. The problem is to locate p shelters minimizing the maximum distance people will have to cover in case of fire in order to reach the closest accessible shelter. The landscape is divided into zones and is modeled as an edge-weighted graph with vertices corresponding to zones and edges corresponding to direct connections between two adjacent zones. The risk associated with fire outbreaks is modeled using a finite set of fire scenarios. Each scenario corresponds to a fire outbreak on a single zone (i.e., on a vertex), with the main consequence of modifying evacuation paths in two ways. First, an evacuation path cannot pass through the vertex on fire. Second, the fact that someone close to the fire may not take rational decisions when selecting a direction to escape is modeled using new kinds of evacuation paths. In this paper, we characterize the set of feasible solutions of a Min PpCP instance. Then, we propose some approximation results for Min PpCP. These results require approximation results for two variants of the (deterministic) Min p-Center problem, called Min MAC p-Center and Min Partial p-Center.
{"title":"Approximating the probabilistic p-Center problem under pressure","authors":"Marc Demange, Marcel A. Haddad, Cécile Murat","doi":"10.1007/s10878-024-01194-y","DOIUrl":"https://doi.org/10.1007/s10878-024-01194-y","url":null,"abstract":"<p>The Probabilistic <i>p</i>-Center problem under Pressure (<span>Min P</span> <i>p</i> <span>CP</span>) is a variant of the usual <span>Min</span> <i>p</i><span>-Center</span> problem we recently introduced in the context of wildfire management. The problem is to locate <i>p</i> shelters minimizing the maximum distance people will have to cover in case of fire in order to reach the closest accessible shelter. The landscape is divided into zones and is modeled as an edge-weighted graph with vertices corresponding to zones and edges corresponding to direct connections between two adjacent zones. The risk associated with fire outbreaks is modeled using a finite set of fire scenarios. Each scenario corresponds to a fire outbreak on a single zone (i.e., on a vertex) with the main consequence of modifying evacuation paths in two ways. First, an evacuation path cannot pass through the vertex on fire. Second, the fact that someone close to the fire may not take rational decisions when selecting a direction to escape is modeled using new kinds of evacuation paths. In this paper, we characterize the set of feasible solutions of <span>Min P</span> <i>p</i> <span>CP</span>-instance. Then, we propose some approximation results for <span>Min P</span> <i>p</i> <span>CP</span>. These results require approximation results for two variants of the (deterministic) <span>Min</span> <i>p</i><span>-Center</span> problem called <span>Min MAC</span> <i>p</i><span>-Center</span> and <span>Min Partial</span> <i>p</i><span>-Center</span>.</p>","PeriodicalId":50231,"journal":{"name":"Journal of Combinatorial Optimization","volume":"35 1","pages":""},"PeriodicalIF":1.0,"publicationDate":"2024-08-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141899477","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Models for two-dimensional bin packing problems with customer order spread
Pub Date: 2024-08-07 | DOI: 10.1007/s10878-024-01201-2
Mateus Martin, Horacio Hideki Yanasse, Maristela O. Santos, Reinaldo Morabito
In this paper, we address an extension of the classical two-dimensional bin packing (2BPP) that considers the spread of customer orders (2BPP-OS). The 2BPP-OS addresses a set of rectangular items, required from different customer orders, to be cut from a set of rectangular bins. All the items of a customer order are dispatched together to the next stage of production or distribution after its completion. The objective is to minimize the number of bins used and the spread of customer orders over the cutting process. The 2BPP-OS gains relevance in manufacturing environments that seek minimum waste solutions with satisfactory levels of customer service. We propose integer linear programming (ILP) models for variants of the 2BPP-OS that consider non-guillotine, 2-stage, restricted 3-stage, and unrestricted 3-stage patterns. We are not aware of integrated approaches for the 2BPP-OS in the literature despite its relevance in practical settings. Using a general-purpose ILP solver, the results show that the 2BPP-OS takes more computational effort to solve than the 2BPP, as it has to consider several symmetries that are often disregarded by the traditional 2BPP approaches. The solutions obtained by the proposed approaches have similar bin usage and significantly better customer-satisfaction metrics than approaches that neglect the customer order spread.
{"title":"Models for two-dimensional bin packing problems with customer order spread","authors":"Mateus Martin, Horacio Hideki Yanasse, Maristela O. Santos, Reinaldo Morabito","doi":"10.1007/s10878-024-01201-2","DOIUrl":"https://doi.org/10.1007/s10878-024-01201-2","url":null,"abstract":"<p>In this paper, we address an extension of the classical two-dimensional bin packing (2BPP) that considers the spread of customer orders (2BPP-OS). The 2BPP-OS addresses a set of rectangular items, required from different customer orders, to be cut from a set of rectangular bins. All the items of a customer order are dispatched together to the next stage of production or distribution after its completion. The objective is to minimize the number of bins used and the spread of customer orders over the cutting process. The 2BPP-OS gains relevance in manufacturing environments that seek minimum waste solutions with satisfactory levels of customer service. We propose integer linear programming (ILP) models for variants of the 2BPP-OS that consider non-guillotine, 2-stage, restricted 3-stage, and unrestricted 3-stage patterns. We are not aware of integrated approaches for the 2BPP-OS in the literature despite its relevance in practical settings. Using a general-purpose ILP solver, the results show that the 2BPP-OS takes more computational effort to solve than the 2BPP, as it has to consider several symmetries that are often disregarded by the traditional 2BPP approaches. The solutions obtained by the proposed approaches have similar bin usage and significantly better metrics of customer satisfaction concerning the approaches that neglect the customer order spread.\u0000</p>","PeriodicalId":50231,"journal":{"name":"Journal of Combinatorial Optimization","volume":"52 1","pages":""},"PeriodicalIF":1.0,"publicationDate":"2024-08-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141899468","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Customer churn prediction using a novel meta-classifier: an investigation on transaction, Telecommunication and customer churn datasets
Pub Date: 2024-08-03 | DOI: 10.1007/s10878-024-01196-w
Fatemeh Ehsani, Monireh Hosseini
With the advancement of electronic service platforms, customers exhibit various purchasing behaviors. Given the extensive array of options and minimal exit barriers, customer migration from one digital service to another has become a common challenge for businesses. Customer churn prediction (CCP) emerges as a crucial marketing strategy aimed at estimating the likelihood of customer abandonment. In this paper, we aim to predict customer churn intentions using a novel robust meta-classifier. We utilized three distinct datasets: transaction, telecommunication, and customer churn datasets. Employing Decision Tree, Random Forest, XGBoost, AdaBoost, and Extra Trees as the five base supervised classifiers on these three datasets, we conducted cross-validation and evaluation setups separately. Additionally, we employed permutation and SelectKBest feature selection to rank the most practical features for achieving the highest accuracy. Furthermore, we utilized BayesSearchCV and GridSearchCV to discover, optimize, and tune the hyperparameters. Subsequently, we applied the refined classifiers in a funnel of a new meta-classifier for each dataset individually. The experimental results indicate that our proposed meta-classifier demonstrates superior accuracy compared to conventional classifiers and even stacking ensemble methods. The predictive outcomes serve as a valuable tool for businesses in identifying potential churners and taking proactive measures to retain customers, thereby enhancing customer retention rates and ensuring business sustainability.
{"title":"Customer churn prediction using a novel meta-classifier: an investigation on transaction, Telecommunication and customer churn datasets","authors":"Fatemeh Ehsani, Monireh Hosseini","doi":"10.1007/s10878-024-01196-w","DOIUrl":"https://doi.org/10.1007/s10878-024-01196-w","url":null,"abstract":"<p>With the advancement of electronic service platforms, customers exhibit various purchasing behaviors. Given the extensive array of options and minimal exit barriers, customer migration from one digital service to another has become a common challenge for businesses. Customer churn prediction (CCP) emerges as a crucial marketing strategy aimed at estimating the likelihood of customer abandonment. In this paper, we aim to predict customer churn intentions using a novel robust meta-classifier. We utilized three distinct datasets: transaction, telecommunication, and customer churn datasets. Employing Decision Tree, Random Forest, XGBoost, AdaBoost, and Extra Trees as the five base supervised classifiers on these three datasets, we conducted cross-validation and evaluation setups separately. Additionally, we employed permutation and SelectKBest feature selection to rank the most practical features for achieving the highest accuracy. Furthermore, we utilized BayesSearchCV and GridSearchCV to discover, optimize, and tune the hyperparameters. Subsequently, we applied the refined classifiers in a funnel of a new meta-classifier for each dataset individually. The experimental results indicate that our proposed meta-classifier demonstrates superior accuracy compared to conventional classifiers and even stacking ensemble methods. The predictive outcomes serve as a valuable tool for businesses in identifying potential churners and taking proactive measures to retain customers, thereby enhancing customer retention rates and ensuring business sustainability.</p>","PeriodicalId":50231,"journal":{"name":"Journal of Combinatorial Optimization","volume":"215 1","pages":""},"PeriodicalIF":1.0,"publicationDate":"2024-08-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141880238","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}