Regular D-length: A tool for improved prefix-stable forward Ramsey factorisations
Pub Date: 2024-04-24, DOI: 10.1016/j.ipl.2024.106497
Théodore Lopez, Benjamin Monmege, Jean-Marc Talbot
Recently, Jecker has introduced and studied the regular D-length of a monoid, defined as the length of its longest chain of regular D-classes. We use this parameter to improve the construction, originally proposed by Colcombet, of a deterministic automaton that maps a word to one of its forward Ramsey splits: these are a relaxation of factorisation forests that enjoy prefix stability, thus allowing a compositional construction. For certain monoids that have a small regular D-length, our construction produces an exponentially more succinct deterministic automaton. Finally, we apply it to obtain better complexity results for the problem of fast infix evaluation.
{"title":"Regular D-length: A tool for improved prefix-stable forward Ramsey factorisations","authors":"Théodore Lopez, Benjamin Monmege, Jean-Marc Talbot","doi":"10.1016/j.ipl.2024.106497","DOIUrl":"https://doi.org/10.1016/j.ipl.2024.106497","url":null,"abstract":"<div><p>Recently, Jecker has introduced and studied the regular <span><math><mi>D</mi></math></span>-length of a monoid, as the length of its longest chain of regular <span><math><mi>D</mi></math></span>-classes. We use this parameter in order to improve the construction, originally proposed by Colcombet, of a deterministic automaton that allows to map a word to one of its forward Ramsey splits: these are a relaxation of factorisation forests that enjoy prefix stability, thus allowing a compositional construction. For certain monoids that have a small regular <span><math><mi>D</mi></math></span>-length, our construction produces an exponentially more succinct deterministic automaton. Finally, we apply it to obtain better complexity result for the problem of fast infix evaluation.</p></div>","PeriodicalId":56290,"journal":{"name":"Information Processing Letters","volume":"187 ","pages":"Article 106497"},"PeriodicalIF":0.5,"publicationDate":"2024-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140647631","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Correcting matrix products over the ring of integers
Pub Date: 2024-04-17, DOI: 10.1016/j.ipl.2024.106496
Yu-Lun Wu, Hung-Lung Wang
Let A, B, and C be three n×n matrices. We investigate the problem of verifying whether AB = C over the ring of integers and of finding the correct product AB. Given that C differs from AB in at most k entries, we propose an algorithm that uses O(√k·n² + k²n) operations. Let α be the largest absolute value of an entry in A, B, and C. The integers involved in the computation are of size O(n³α²).
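As a point of reference for the verification part of this task, the classical Freivalds check multiplies both sides by a random 0/1 vector and compares the results, costing O(n²) integer operations per trial. The minimal Python sketch below shows this standard textbook technique; it is not the correction algorithm proposed in the paper, and the function name freivalds_check is ours.

```python
import random

def freivalds_check(A, B, C, trials=10):
    """Randomized test of whether A @ B == C over the integers.

    Each trial multiplies both sides by a random 0/1 vector r and compares
    A(Br) with Cr, using O(n^2) integer operations. If C != AB, a single
    trial reports a mismatch with probability at least 1/2.
    """
    n = len(A)
    for _ in range(trials):
        r = [random.randint(0, 1) for _ in range(n)]
        Br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]
        ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]
        Cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]
        if ABr != Cr:
            return False  # C certainly differs from AB
    return True  # C == AB with probability >= 1 - 2**(-trials)
```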
{"title":"Correcting matrix products over the ring of integers","authors":"Yu-Lun Wu, Hung-Lung Wang","doi":"10.1016/j.ipl.2024.106496","DOIUrl":"10.1016/j.ipl.2024.106496","url":null,"abstract":"<div><p>Let <em>A</em>, <em>B</em>, and <em>C</em> be three <span><math><mi>n</mi><mo>×</mo><mi>n</mi></math></span> matrices. We investigate the problem of verifying whether <span><math><mi>A</mi><mi>B</mi><mo>=</mo><mi>C</mi></math></span> over the ring of integers and finding the correct product <em>AB</em>. Given that <em>C</em> is different from <em>AB</em> by at most <em>k</em> entries, we propose an algorithm that uses <span><math><mi>O</mi><mo>(</mo><msqrt><mrow><mi>k</mi></mrow></msqrt><msup><mrow><mi>n</mi></mrow><mrow><mn>2</mn></mrow></msup><mo>+</mo><msup><mrow><mi>k</mi></mrow><mrow><mn>2</mn></mrow></msup><mi>n</mi><mo>)</mo></math></span> operations. Let <em>α</em> be the largest absolute value of an entry in <em>A</em>, <em>B</em>, and <em>C</em>. The integers involved in the computation are of <span><math><mi>O</mi><mo>(</mo><msup><mrow><mi>n</mi></mrow><mrow><mn>3</mn></mrow></msup><msup><mrow><mi>α</mi></mrow><mrow><mn>2</mn></mrow></msup><mo>)</mo></math></span>.</p></div>","PeriodicalId":56290,"journal":{"name":"Information Processing Letters","volume":"186 ","pages":"Article 106496"},"PeriodicalIF":0.5,"publicationDate":"2024-04-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140616211","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A linear-time algorithm for the center problem in weighted cycle graphs
Pub Date: 2024-04-04, DOI: 10.1016/j.ipl.2024.106495
Taekang Eom , Hee-Kap Ahn
We study the problem of computing the center of cycle graphs whose vertices are weighted. The distance from a vertex to a point of the graph is defined as the weight of the vertex times the length of the shortest path between the vertex and the point. The weighted center of the graph is a point of the graph such that the maximum distance of the vertices of the graph to the point is minimum among all points of the graph. We present an O(n)-time algorithm for the discrete and continuous weighted center problem on cycle graphs with n vertices. Our algorithm improves upon the best known algorithm that takes O(n log n) time. Moreover, it is optimal for the weighted center problem of cycle graphs.
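To make the problem statement concrete, the following Python sketch computes the discrete weighted center of a cycle by brute force in O(n²) time, assuming vertex i and vertex (i+1) mod n are joined by an edge of a given length. It only illustrates the definition and is not the linear-time algorithm of the paper.

```python
def discrete_weighted_center(weights, edge_lengths):
    """O(n^2) brute force for the discrete weighted center of a cycle.

    weights[i] is the (positive) weight of vertex i; edge_lengths[i] is the
    length of the edge between vertex i and vertex (i + 1) % n. Returns the
    vertex c minimizing max_v weights[v] * dist(v, c), together with that
    maximum weighted distance.
    """
    n = len(weights)
    total = sum(edge_lengths)
    # prefix[i] = clockwise distance from vertex 0 to vertex i
    prefix = [0] * n
    for i in range(1, n):
        prefix[i] = prefix[i - 1] + edge_lengths[i - 1]

    def dist(u, v):
        clockwise = abs(prefix[u] - prefix[v])
        return min(clockwise, total - clockwise)

    best_vertex, best_radius = None, float("inf")
    for c in range(n):
        radius = max(weights[v] * dist(v, c) for v in range(n))
        if radius < best_radius:
            best_vertex, best_radius = c, radius
    return best_vertex, best_radius
```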
{"title":"A linear-time algorithm for the center problem in weighted cycle graphs","authors":"Taekang Eom , Hee-Kap Ahn","doi":"10.1016/j.ipl.2024.106495","DOIUrl":"https://doi.org/10.1016/j.ipl.2024.106495","url":null,"abstract":"<div><p>We study the problem of computing the center of cycle graphs whose vertices are weighted. The distance from a vertex to a point of the graph is defined as the weight of the vertex times the length of the shortest path between the vertex and the point. The weighted center of the graph is a point of the graph such that the maximum distance of the vertices of the graph to the point is minimum among all points of the graph. We present an <span><math><mi>O</mi><mo>(</mo><mi>n</mi><mo>)</mo></math></span>-time algorithm for the discrete and continuous weighted center problem on cycle graphs with <em>n</em> vertices. Our algorithm improves upon the best known algorithm that takes <span><math><mi>O</mi><mo>(</mo><mi>n</mi><mi>log</mi><mo></mo><mi>n</mi><mo>)</mo></math></span> time. Moreover, it is optimal for the weighted center problem of cycle graphs.</p></div>","PeriodicalId":56290,"journal":{"name":"Information Processing Letters","volume":"186 ","pages":"Article 106495"},"PeriodicalIF":0.5,"publicationDate":"2024-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140540027","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The autocorrelation of a class of quaternary sequences of length pq with high complexity
Pub Date: 2024-03-24, DOI: 10.1016/j.ipl.2024.106494
Feifei Yan , Pinhui Ke , Zuling Chang
Recently, a class of quaternary sequences with period pq, where p and q are two distinct odd primes, introduced by Zhang et al., was proved to possess high linear complexity and 4-adic complexity. In this paper, we determine the autocorrelation distribution of this class of quaternary sequences. Our results indicate that the studied quaternary sequences are weak with respect to the correlation property.
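For readers unfamiliar with the quantity being studied, the periodic autocorrelation of a quaternary sequence s over Z_4 at shift τ is usually taken to be C(τ) = Σ_t ω^(s(t+τ) − s(t)) with ω = i. The short Python sketch below evaluates this definition directly; the example sequence is arbitrary and is not the sequence family considered in the paper.

```python
def quaternary_autocorrelation(seq, tau):
    """Periodic autocorrelation C(tau) of a quaternary sequence over Z_4,
    using the 4th root of unity omega = i."""
    omega = 1j
    N = len(seq)
    return sum(omega ** ((seq[(t + tau) % N] - seq[t]) % 4) for t in range(N))

# Toy usage: autocorrelation values of an arbitrary quaternary sequence.
s = [0, 1, 3, 2, 0, 2, 1, 3]
values = [quaternary_autocorrelation(s, tau) for tau in range(len(s))]
```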
{"title":"The autocorrelation of a class of quaternary sequences of length pq with high complexity","authors":"Feifei Yan , Pinhui Ke , Zuling Chang","doi":"10.1016/j.ipl.2024.106494","DOIUrl":"https://doi.org/10.1016/j.ipl.2024.106494","url":null,"abstract":"<div><p>Recently, a class of quaternary sequences with period <em>pq</em>, where <em>p</em> and <em>q</em> are two distinct odd primes introduced by Zhang et al. were proved to possess high linear complexity and 4-adic complexity. In this paper, we determine the autocorrelation distribution of this class of quaternary sequence. Our results indicate that the studied quaternary sequence are weak with respect to the correlation property.</p></div>","PeriodicalId":56290,"journal":{"name":"Information Processing Letters","volume":"186 ","pages":"Article 106494"},"PeriodicalIF":0.5,"publicationDate":"2024-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140321256","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Branching bisimulation semantics for quantum processes
Pub Date: 2024-03-16, DOI: 10.1016/j.ipl.2024.106492
Hao Wu , Qizhe Yang , Huan Long
The qCCS model proposed by Feng et al. provides a powerful framework to describe and reason about quantum communication systems that could be entangled with the environment. However, they only studied weak bisimulation semantics. In this paper we propose a new branching bisimilarity for qCCS and show that it is a congruence. The new bisimilarity is based on the concept of ϵ-tree and preserves the branching structure of concurrent processes where both quantum and classical components are allowed. Furthermore, we present a polynomial time equivalence checking algorithm for the ground version of our branching bisimilarity.
{"title":"Branching bisimulation semantics for quantum processes","authors":"Hao Wu , Qizhe Yang , Huan Long","doi":"10.1016/j.ipl.2024.106492","DOIUrl":"https://doi.org/10.1016/j.ipl.2024.106492","url":null,"abstract":"<div><p>The qCCS model proposed by Feng et al. provides a powerful framework to describe and reason about quantum communication systems that could be entangled with the environment. However, they only studied weak bisimulation semantics. In this paper we propose a new branching bisimilarity for qCCS and show that it is a congruence. The new bisimilarity is based on the concept of <em>ϵ</em>-tree and preserves the branching structure of concurrent processes where both quantum and classical components are allowed. Furthermore, we present a polynomial time equivalence checking algorithm for the ground version of our branching bisimilarity.</p></div>","PeriodicalId":56290,"journal":{"name":"Information Processing Letters","volume":"186 ","pages":"Article 106492"},"PeriodicalIF":0.5,"publicationDate":"2024-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140163830","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Smaller kernels for two vertex deletion problems
Pub Date: 2024-03-15, DOI: 10.1016/j.ipl.2024.106493
Dekel Tsur
In this paper we consider two vertex deletion problems. In the Block Vertex Deletion problem, the input is a graph G and an integer k, and the goal is to decide whether there is a set of at most k vertices whose removal from G results in a block graph (a graph in which every biconnected component is a clique). In the Pathwidth One Vertex Deletion problem, the input is a graph G and an integer k, and the goal is to decide whether there is a set of at most k vertices whose removal from G results in a graph with pathwidth at most one. We give a kernel for Block Vertex Deletion with O(k³) vertices and a kernel for Pathwidth One Vertex Deletion with O(k²) vertices. Our results improve the previous O(k⁴)-vertex kernel for Block Vertex Deletion (Agrawal et al., 2016 [1]) and the O(k³)-vertex kernel for Pathwidth One Vertex Deletion (Cygan et al., 2012 [3]).
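As a reminder of the target graph class (and not of the kernelization itself), the sketch below uses networkx to test whether deleting a candidate vertex set S leaves a block graph, by checking that every biconnected component of the remaining graph induces a clique. Function names are ours.

```python
import itertools
import networkx as nx

def is_block_graph(G):
    """True iff every biconnected component of G induces a clique."""
    for component in nx.biconnected_components(G):
        for u, v in itertools.combinations(component, 2):
            if not G.has_edge(u, v):
                return False
    return True

def is_block_vertex_deletion_set(G, S):
    """Check whether removing the vertex set S from G yields a block graph."""
    return is_block_graph(G.subgraph(set(G.nodes) - set(S)))
```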
{"title":"Smaller kernels for two vertex deletion problems","authors":"Dekel Tsur","doi":"10.1016/j.ipl.2024.106493","DOIUrl":"https://doi.org/10.1016/j.ipl.2024.106493","url":null,"abstract":"<div><p>In this paper we consider two vertex deletion problems. In the <span>Block Vertex Deletion</span> problem, the input is a graph <em>G</em> and an integer <em>k</em>, and the goal is to decide whether there is a set of at most <em>k</em> vertices whose removal from <em>G</em> result in a block graph (a graph in which every biconnected component is a clique). In the <span>Pathwidth One Vertex Deletion</span> problem, the input is a graph <em>G</em> and an integer <em>k</em>, and the goal is to decide whether there is a set of at most <em>k</em> vertices whose removal from <em>G</em> result in a graph with pathwidth at most one. We give a kernel for <span>Block Vertex Deletion</span> with <span><math><mi>O</mi><mo>(</mo><msup><mrow><mi>k</mi></mrow><mrow><mn>3</mn></mrow></msup><mo>)</mo></math></span> vertices and a kernel for <span>Pathwidth One Vertex Deletion</span> with <span><math><mi>O</mi><mo>(</mo><msup><mrow><mi>k</mi></mrow><mrow><mn>2</mn></mrow></msup><mo>)</mo></math></span> vertices. Our results improve the previous <span><math><mi>O</mi><mo>(</mo><msup><mrow><mi>k</mi></mrow><mrow><mn>4</mn></mrow></msup><mo>)</mo></math></span>-vertex kernel for <span>Block Vertex Deletion</span> (Agrawal et al., 2016 <span>[1]</span>) and the <span><math><mi>O</mi><mo>(</mo><msup><mrow><mi>k</mi></mrow><mrow><mn>3</mn></mrow></msup><mo>)</mo></math></span>-vertex kernel for <span>Pathwidth One Vertex Deletion</span> (Cygan et al., 2012 <span>[3]</span>).</p></div>","PeriodicalId":56290,"journal":{"name":"Information Processing Letters","volume":"186 ","pages":"Article 106493"},"PeriodicalIF":0.5,"publicationDate":"2024-03-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140160714","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Long directed detours: Reduction to 2-Disjoint Paths
Pub Date: 2024-03-13, DOI: 10.1016/j.ipl.2024.106491
Ashwin Jacob, Michał Włodarczyk, Meirav Zehavi
In the Longest (s,t)-Detour problem, we look for an (s,t)-path that is at least k vertices longer than a shortest one. We study the parameterized complexity of Longest (s,t)-Detour when parameterized by k: this falls into the research paradigm of ‘parameterization above guarantee’. Whereas the problem is known to be fixed-parameter tractable (FPT) on undirected graphs, the status of Longest (s,t)-Detour on directed graphs remains highly unclear: it is not even known to be solvable in polynomial time for k = 1. Recently, Fomin et al. made progress in this direction by showing that the problem is FPT on every class of directed graphs where the 3-Disjoint Paths problem is solvable in polynomial time. We improve upon their result by weakening this assumption: we show that only a polynomial-time algorithm for 2-Disjoint Paths is required.
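To fix the problem definition, here is a brute-force (exponential-time) check for a detour that is at least k vertices longer than a shortest (s,t)-path, written with networkx. It is only an illustration of the question being asked and is unrelated to the reduction to 2-Disjoint Paths.

```python
import networkx as nx

def has_detour(G, s, t, k):
    """Brute-force test for an (s, t)-path with at least k more vertices
    than a shortest one. Enumerates all simple paths, so it is exponential
    in general; intended only to illustrate the problem statement."""
    shortest = nx.shortest_path_length(G, s, t)  # number of edges on a shortest path
    for path in nx.all_simple_paths(G, s, t):
        if len(path) - 1 >= shortest + k:  # path has >= k extra vertices/edges
            return True
    return False
```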
{"title":"Long directed detours: Reduction to 2-Disjoint Paths","authors":"Ashwin Jacob, Michał Włodarczyk, Meirav Zehavi","doi":"10.1016/j.ipl.2024.106491","DOIUrl":"10.1016/j.ipl.2024.106491","url":null,"abstract":"<div><p>In the <span>Longest</span> <span><math><mo>(</mo><mi>s</mi><mo>,</mo><mi>t</mi><mo>)</mo></math></span><span>-Detour</span> problem, we look for an <span><math><mo>(</mo><mi>s</mi><mo>,</mo><mi>t</mi><mo>)</mo></math></span>-path that is at least <em>k</em> vertices longer than a shortest one. We study the parameterized complexity of <span>Longest</span> <span><math><mo>(</mo><mi>s</mi><mo>,</mo><mi>t</mi><mo>)</mo></math></span><span>-Detour</span> when parameterized by <em>k</em>: this falls into the research paradigm of ‘parameterization above guarantee’. Whereas the problem is known to be fixed-parameter tractable (FPT) on undirected graphs, the status of <span>Longest</span> <span><math><mo>(</mo><mi>s</mi><mo>,</mo><mi>t</mi><mo>)</mo></math></span><span>-Detour</span> on directed graphs remains highly unclear: it is not even known to be solvable in polynomial time for <span><math><mi>k</mi><mo>=</mo><mn>1</mn></math></span>. Recently, Fomin et al. made progress in this direction by showing that the problem is FPT on every class of directed graphs where the <span>3-Disjoint Paths</span> problem is solvable in polynomial time. We improve upon their result by weakening this assumption: we show that only a polynomial-time algorithm for <span>2-Disjoint Paths</span> is required.</p></div>","PeriodicalId":56290,"journal":{"name":"Information Processing Letters","volume":"186 ","pages":"Article 106491"},"PeriodicalIF":0.5,"publicationDate":"2024-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140153962","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sparsifying Count Sketch
Pub Date: 2024-02-29, DOI: 10.1016/j.ipl.2024.106490
Bhisham Dev Verma , Rameshwar Pratap , Punit Pankaj Dubey
The seminal work of Charikar et al. [1], called Count-Sketch, suggests a sketching algorithm for real-valued vectors that has been used in frequency estimation for data streams, pairwise inner product estimation for real-valued vectors, etc. One of the major advantages of Count-Sketch over other similar sketching algorithms, such as random projection, is that its running time, as well as the sparsity of the sketch, depends on the sparsity of the input. Therefore, sparse datasets enjoy space-efficient (sparse) sketches and faster running time. However, on dense datasets, these advantages of Count-Sketch might be negligible over other baselines. In this work, we address this challenge by suggesting a simple and effective approach that outputs (asymptotically) a sparser sketch than that obtained via Count-Sketch, and as a by-product, we also achieve a faster running time. Simultaneously, the quality of our estimate closely approximates that of Count-Sketch. For the frequency estimation and pairwise inner product estimation problems, our proposal Sparse-Count-Sketch provides unbiased estimates. These estimates, however, have slightly higher variances than their respective estimates obtained via Count-Sketch. To address this issue, we present improved estimators for these problems based on maximum likelihood estimation (MLE) that offer smaller variances even w.r.t. Count-Sketch. We provide a rigorous theoretical analysis of our proposal for frequency estimation for data streams and pairwise inner product estimation for real-valued vectors.
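For context, a minimal implementation of the baseline Count-Sketch of Charikar et al. is sketched below: d rows of w counters, one bucket hash and one sign hash per row, and a median-of-rows frequency estimator. This is the baseline that the paper sparsifies, not the proposed Sparse-Count-Sketch, and the hash construction via Python's built-in hash is purely illustrative.

```python
import random
from statistics import median

class CountSketch:
    """Baseline Count-Sketch with d rows of w counters."""

    def __init__(self, d=5, w=256, seed=0):
        rng = random.Random(seed)
        self.d, self.w = d, w
        self.table = [[0] * w for _ in range(d)]
        # One (bucket, sign) hash seed pair per row; illustrative only.
        self.seeds = [(rng.random(), rng.random()) for _ in range(d)]

    def _bucket(self, row, i):
        return hash((self.seeds[row][0], i)) % self.w

    def _sign(self, row, i):
        return 1 if hash((self.seeds[row][1], i)) % 2 == 0 else -1

    def update(self, i, count=1):
        # Add the signed count to one bucket per row.
        for r in range(self.d):
            self.table[r][self._bucket(r, i)] += self._sign(r, i) * count

    def estimate(self, i):
        # Median over rows of the sign-corrected counters.
        return median(self._sign(r, i) * self.table[r][self._bucket(r, i)]
                      for r in range(self.d))
```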
{"title":"Sparsifying Count Sketch","authors":"Bhisham Dev Verma , Rameshwar Pratap , Punit Pankaj Dubey","doi":"10.1016/j.ipl.2024.106490","DOIUrl":"https://doi.org/10.1016/j.ipl.2024.106490","url":null,"abstract":"<div><p>The seminal work of Charikar et al. <span>[1]</span> called <span>Count-Sketch</span> suggests a sketching algorithm for real-valued vectors that has been used in frequency estimation for data streams and pairwise inner product estimation for real-valued vectors etc. One of the major advantages of <span>Count-Sketch</span> over other similar sketching algorithms, such as random projection, is that its running time, as well as the sparsity of sketch, depends on the sparsity of the input. Therefore, sparse datasets enjoy space-efficient (sparse sketches) and faster running time. However, on dense datasets, these advantages of <span>Count-Sketch</span> might be negligible over other baselines. In this work, we address this challenge by suggesting a simple and effective approach that outputs (asymptotically) a sparser sketch than that obtained via <span>Count-Sketch</span>, and as a by-product, we also achieve a faster running time. Simultaneously, the quality of our estimate is closely approximate to that of <span>Count-Sketch</span>. For frequency estimation and pairwise inner product estimation problems, our proposal <span>Sparse-Count-Sketch</span> provides unbiased estimates. These estimations, however, have slightly higher variances than their respective estimates obtained via <span>Count-Sketch</span>. To address this issue, we present improved estimators for these problems based on maximum likelihood estimation (MLE) that offer smaller variances even <em>w.r.t.</em> <span>Count-Sketch</span>. We suggest a rigorous theoretical analysis of our proposal for frequency estimation for data streams and pairwise inner product estimation for real-valued vectors.</p></div>","PeriodicalId":56290,"journal":{"name":"Information Processing Letters","volume":"186 ","pages":"Article 106490"},"PeriodicalIF":0.5,"publicationDate":"2024-02-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140042110","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Red Blue Set Cover problem on axis-parallel hyperplanes and other objects
Pub Date: 2024-02-29, DOI: 10.1016/j.ipl.2024.106485
V.P. Abidha , Pradeesha Ashok
Given a universe U = R ∪ B consisting of a finite set of red elements R and a finite set of blue elements B, and a family F of subsets of U, the Red Blue Set Cover problem is to find a subset F′ of F that covers all blue elements of B and a minimum number of red elements from R.

We prove that the Red Blue Set Cover problem is NP-hard even when R and B respectively are sets of red and blue points in ℝ² and the sets in F are defined by axis-parallel lines, i.e., every set is a maximal set of points with the same x or y coordinate.

We then study the parameterized complexity of a generalization of this problem, where U is a set of points in ℝ^d and F is a collection of sets of axis-parallel hyperplanes in ℝ^d, under different parameterizations, where d is a constant. For every parameter, we show that the problem is fixed-parameter tractable and also show the existence of a polynomial kernel. We further consider the Red Blue Set Cover problem for some special types of rectangles in ℝ².
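As a sanity check of the definition, the following brute-force Python sketch solves Red Blue Set Cover exactly by enumerating sub-families. It is exponential in |F| and unrelated to the kernels or the hardness reduction above; the function name is ours.

```python
from itertools import combinations

def red_blue_set_cover(red, blue, family):
    """Exact brute force: choose a sub-family covering every blue element
    while touching as few red elements as possible.

    `red` and `blue` are disjoint sets; `family` is a list of sets over
    red | blue. Returns the chosen indices and the red elements they cover.
    """
    best_cover, best_red = None, None
    m = len(family)
    for size in range(m + 1):
        for chosen in combinations(range(m), size):
            covered = set().union(*(family[j] for j in chosen))
            if blue <= covered:  # all blue elements are covered
                red_used = covered & red
                if best_red is None or len(red_used) < len(best_red):
                    best_cover, best_red = chosen, red_used
    return best_cover, best_red
```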
{"title":"Red Blue Set Cover problem on axis-parallel hyperplanes and other objects","authors":"V.P. Abidha , Pradeesha Ashok","doi":"10.1016/j.ipl.2024.106485","DOIUrl":"https://doi.org/10.1016/j.ipl.2024.106485","url":null,"abstract":"<div><p>Given a universe <span><math><mi>U</mi><mo>=</mo><mi>R</mi><mo>∪</mo><mi>B</mi></math></span> of a finite set of red elements <em>R</em>, and a finite set of blue elements <em>B</em> and a family <span><math><mi>F</mi></math></span> of subsets of <span><math><mi>U</mi></math></span>, the <span>Red Blue Set Cover</span> problem is to find a subset <span><math><msup><mrow><mi>F</mi></mrow><mrow><mo>′</mo></mrow></msup></math></span> of <span><math><mi>F</mi></math></span> that covers all blue elements of <em>B</em> and minimum number of red elements from <em>R</em>.</p><p>We prove that the <span>Red Blue Set Cover</span> problem is NP-hard even when <em>R</em> and <em>B</em> respectively are sets of red and blue points in <span><math><msup><mrow><mi>IR</mi></mrow><mrow><mn>2</mn></mrow></msup></math></span> and the sets in <span><math><mi>F</mi></math></span> are defined by axis−parallel lines i.e., every set is a maximal set of points with the same <em>x</em> or <em>y</em> coordinate.</p><p>We then study the parameterized complexity of a generalization of this problem, where <span><math><mi>U</mi></math></span> is a set of points in <span><math><msup><mrow><mi>IR</mi></mrow><mrow><mi>d</mi></mrow></msup></math></span> and <span><math><mi>F</mi></math></span> is a collection of set of axis−parallel hyperplanes in <span><math><msup><mrow><mi>IR</mi></mrow><mrow><mi>d</mi></mrow></msup></math></span> under different parameterizations, where <em>d</em> is a constant. For every parameter, we show that the problem is fixed-parameter tractable and also show the existence of a polynomial kernel. We further consider the <span>Red Blue Set Cover</span> problem for some special types of rectangles in <span><math><msup><mrow><mi>IR</mi></mrow><mrow><mn>2</mn></mrow></msup></math></span>.</p></div>","PeriodicalId":56290,"journal":{"name":"Information Processing Letters","volume":"186 ","pages":"Article 106485"},"PeriodicalIF":0.5,"publicationDate":"2024-02-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140042109","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The group factorization problem in finite groups of Lie type
Pub Date: 2024-02-28, DOI: 10.1016/j.ipl.2024.106484
Haibo Hong, Shi Bai, Fenghao Liu
With the development of Lie theory, Lie groups have profound significance in many branches of mathematics and physics. In Lie theory, the matrix exponential plays a crucial role in relating Lie groups and Lie algebras. Meanwhile, as finite analogues of Lie groups, finite groups of Lie type also have wide application scenarios in mathematics and physics owing to their unique mathematical structures. In this context, it is meaningful to explore the potential applications of finite groups of Lie type in cryptography. In this paper, we first build the relationship between the matrix exponential and the discrete logarithm problem (DLP) in finite groups of Lie type. Afterwards, we prove that the complexity of solving the non-abelian factorization (NAF) problem is polynomial in the rank n of the finite group of Lie type. Furthermore, combining this with the Algebraic Span technique, we propose an efficient algorithm for solving the group factorization problem (GFP) in finite groups of Lie type. Therefore, it remains an open problem to devise secure cryptosystems based on Lie theory.
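As a toy illustration of the discrete logarithm terminology in a finite matrix group (not of the paper's Lie-theoretic constructions or its attacks), the sketch below raises a 2×2 matrix to a power modulo a prime by repeated squaring, while recovering the exponent is done by exhaustive search. All function names are ours.

```python
def mat_mul(A, B, p):
    """Multiply two 2x2 matrices modulo the prime p."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) % p for j in range(2)]
            for i in range(2)]

def mat_pow(A, e, p):
    """Compute A**e mod p by repeated squaring, using O(log e) products."""
    R = [[1, 0], [0, 1]]
    while e:
        if e & 1:
            R = mat_mul(R, A, p)
        A = mat_mul(A, A, p)
        e >>= 1
    return R

def brute_force_dlp(G, H, p, bound):
    """Smallest x <= bound with G**x == H mod p, or None (exhaustive search)."""
    X = [[1, 0], [0, 1]]
    for x in range(1, bound + 1):
        X = mat_mul(X, G, p)
        if X == H:
            return x
    return None
```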
{"title":"The group factorization problem in finite groups of Lie type","authors":"Haibo Hong, Shi Bai, Fenghao Liu","doi":"10.1016/j.ipl.2024.106484","DOIUrl":"https://doi.org/10.1016/j.ipl.2024.106484","url":null,"abstract":"<div><p>With the development of Lie theory, Lie groups have profound significance in many branches of mathematics and physics. In Lie theory, matrix exponential plays a crucial role between Lie groups and Lie algebras. Meanwhile, as finite analogues of Lie groups, finite groups of Lie type also have wide application scenarios in mathematics and physics owning to their unique mathematical structures. In this context, it is meaningful to explore the potential applications of finite groups of Lie type in cryptography. In this paper, we firstly built the relationship between matrix exponential and discrete logarithmic problem (DLP) in finite groups of Lie type. Afterwards, we proved that the complexity of solving non-abelian factorization (NAF) problem is polynomial with the rank <em>n</em> of the finite group of Lie type. Furthermore, combining with the Algebraic Span, we proposed an efficient algorithm for solving group factorization problem (GFP) in finite groups of Lie type. Therefore, it's still an open problem to devise secure cryptosystems based on Lie theory.</p></div>","PeriodicalId":56290,"journal":{"name":"Information Processing Letters","volume":"186 ","pages":"Article 106484"},"PeriodicalIF":0.5,"publicationDate":"2024-02-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140014410","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}