Pub Date: 2025-12-07 | DOI: 10.1016/j.tcs.2025.115685
Taekang Eom, Seungjun Lee, Hee-Kap Ahn
Given a convex polygon with k vertices and a polygonal domain consisting of polygonal obstacles with n vertices in total in the plane, we study the optimization problem of finding a largest similar copy of the polygon that can be placed in the polygonal domain without intersecting the obstacles. We present an upper bound O(k²n²λ₄(k)) on the number of combinatorial changes occurring to the underlying structure during the rotation of the polygon, together with an O(k²n²λ₄(k) log n)-time deterministic algorithm for the problem, where λₛ(n) denotes the maximum length of a Davenport–Schinzel sequence of order s over n distinct symbols. This is the first improvement in more than 27 years on the previously best known results for the problem, by Chew and Kedem [SoCG89, CGTA93] and Sharir and Toledo [SoCG91, CGTA94]. Our result also improves the time complexity of the high-clearance motion planning algorithm by Chew and Kedem.
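For context, a brief note on the Davenport–Schinzel quantity in the bound above (standard facts about these sequences, not claims taken from the paper): a Davenport–Schinzel sequence of order s over n distinct symbols contains no two equal adjacent symbols and no alternating subsequence a … b … a … b … of length s + 2, and λₛ(n) is the maximum length of such a sequence. For order 4, the known tight bound is

    \lambda_4(n) = \Theta\left(n \cdot 2^{\alpha(n)}\right)

where α denotes the inverse Ackermann function; hence O(k²n²λ₄(k)) is roughly k³n², up to a factor of 2^{α(k)}.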
{"title":"Largest similar copies of convex polygons in polygonal domains","authors":"Taekang Eom , Seungjun Lee , Hee-Kap Ahn","doi":"10.1016/j.tcs.2025.115685","DOIUrl":"10.1016/j.tcs.2025.115685","url":null,"abstract":"<div><div>Given a convex polygon with <em>k</em> vertices and a polygonal domain consisting of polygonal obstacles with <em>n</em> vertices in total in the plane, we study the optimization problem of finding a largest similar copy of the polygon that can be placed in the polygonal domain without intersecting the obstacles. We present an upper bound <em>O</em>(<em>k</em><sup>2</sup><em>n</em><sup>2</sup><em>λ</em><sub>4</sub>(<em>k</em>)) on the number of combinatorial changes occurring to the underlying structure during the rotation of the polygon, together with an <em>O</em>(<em>k</em><sup>2</sup><em>n</em><sup>2</sup><em>λ</em><sub>4</sub>(<em>k</em>)log <em>n</em>)-time deterministic algorithm for the problem, where <em>λ<sub>s</sub></em>(<em>n</em>) is the length of the longest Davenport–Schinzel sequence of order <em>s</em> including <em>n</em> distinct symbols. This improves upon the previously best known results by Chew and Kedem [SoCG89, CGTA93] and Sharir and Toledo [SoCG91, CGTA94] on the problem in more than 27 years. Our result also improves the time complexity of the high-clearance motion planning algorithm by Chew and Kedem.</div></div>","PeriodicalId":49438,"journal":{"name":"Theoretical Computer Science","volume":"1063 ","pages":"Article 115685"},"PeriodicalIF":1.0,"publicationDate":"2025-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145746973","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-12-06 | DOI: 10.1016/j.tcs.2025.115680
Ke Wang, Haodong Jiang, Zhenfeng Zhang, Long Chen, Huiqin Xie
Key reuse security is an important security property considered in the NIST post-quantum cryptography standardization. At PKC’20, Zhang et al. proposed Aigis.KEM, a key encapsulation mechanism based on asymmetric MLWE. Aigis.KEM offers flexible parameter selection, has high overall performance, and won first prize in China’s national cryptographic algorithm competition; however, its key reuse security has so far been unclear. This paper studies the key reuse security of Aigis.KEM. Since Aigis.KEM is derived from the public-key encryption scheme Aigis.PKE, we first assess its key reuse resilience via key recovery under plaintext-checking attack (KR-PCA). We then optimize the attack and propose a two-positional KR-PCA attack that further approaches the lower bound on attack complexity. We also verify these attacks experimentally and discuss further optimizations and improvements. Finally, building on the KR-PCA attacks against Aigis.PKE, we propose practical attacks on Aigis.KEM that exploit side-channel or fault-injection attacks, and we explore possible countermeasures. This work helps clarify the potential risks of Aigis.KEM and guide its application in practice.
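To make the attack model concrete, here is a minimal Python sketch of the plaintext-checking oracle a KR-PCA adversary is assumed to have: one bit per query, indicating whether a chosen ciphertext decrypts to a chosen candidate under the reused key. The toy scheme and all names (toy_keygen, toy_enc, toy_dec) are illustrative placeholders only; this is not Aigis.PKE and not the paper's actual attack.

    import secrets

    # Toy stand-in for a public-key encryption scheme (NOT Aigis.PKE):
    # the "ciphertext" is plaintext XOR key, so decryption repeats the XOR.
    def toy_keygen(nbytes: int = 16):
        sk = secrets.token_bytes(nbytes)
        pk = sk  # placeholder: a real scheme would derive pk from sk
        return pk, sk

    def toy_enc(pk: bytes, m: bytes) -> bytes:
        return bytes(a ^ b for a, b in zip(m, pk))

    def toy_dec(sk: bytes, c: bytes) -> bytes:
        return bytes(a ^ b for a, b in zip(c, sk))

    def plaintext_checking_oracle(sk: bytes):
        """KR-PCA oracle: on (ciphertext, candidate plaintext), reveal only
        whether the ciphertext decrypts to that candidate under the reused key."""
        def oracle(c: bytes, m_guess: bytes) -> bool:
            return toy_dec(sk, c) == m_guess
        return oracle

    if __name__ == "__main__":
        pk, sk = toy_keygen()
        oracle = plaintext_checking_oracle(sk)
        c = toy_enc(pk, b"reused-key query")
        # The adversary learns one bit per query; a KR-PCA attack crafts many
        # such queries against the reused key to gradually recover it.
        print(oracle(c, b"reused-key query"))    # True
        print(oracle(c, b"0123456789abcdef"))    # False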
{"title":"Analysis of key reuse security for Aigis.KEM","authors":"Ke Wang , Haodong Jiang , Zhenfeng Zhang , Long Chen , Huiqin Xie","doi":"10.1016/j.tcs.2025.115680","DOIUrl":"10.1016/j.tcs.2025.115680","url":null,"abstract":"<div><div>Key reuse security is an important security property considered in the NIST post-quantum cryptography algorithm standardization. At PKC’20, Zhang et al. proposed Aigis.KEM, a key encapsulation mechanism based on asymmetric MLWE. Aigis.KEM provides flexible parameter selection, has high comprehensive performance, and won the first prize of the China’s National cryptographic algorithm competition. However, its key reuse security is currently unclear. This paper studies the key reuse security of Aigis.KEM. Aigis.KEM is derived from public key encryption Aigis.PKE, so we will first assess its key reuse resilience using key recovery under plaintext-checking attack (KR-PCA). Then, we optimize the attack and proposes a two-positional KR-PCA attack to further approach the lower bound of attack complexity. We also verify these attacks through experiments, and discuss the further optimization and improvement. Finally, based on the KR-PCA attacks on Aigis.PKE, we further propose practical attacks on Aigis.KEM by utilizing side-channel attacks or fault-injection attacks. In response to these attacks, we explored possible countermeasures. The work helps to clarify the potential risks of Aigis.KEM and guide its application in practice.</div></div>","PeriodicalId":49438,"journal":{"name":"Theoretical Computer Science","volume":"1063 ","pages":"Article 115680"},"PeriodicalIF":1.0,"publicationDate":"2025-12-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145746975","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-12-04 | DOI: 10.1016/j.tcs.2025.115681
Liuyu Yang, Xinxuan Zhang, Yi Deng, Zhuo Wu, Xudong Zhu
Registered attribute-based signature (registered ABS), introduced by Zhang et al. (PKC’24), eliminates the key escrow problem associated with classical attribute-based signature (ABS). It allows users to generate public/secret key pairs themselves and register the corresponding public key and attribute with a key curator. Unlike a trusted attribute authority, the key curator is fully transparent and retains no secrets. In this paper, we propose the first generic framework for anonymous registered ABS that supports circuits as policies. We achieve this goal through an approach we call “accumulate-then-sign-then-prove”, which leverages commonly used cryptographic primitives, including digital signatures, accumulators, and non-interactive zero-knowledge schemes (NIZKs). We further enrich the functionality by adding user removal, making our scheme dynamic. Our generic framework can be instantiated from various combinations of inner- and outer-layer protocols based on different assumptions, and we provide recommendations, from three different perspectives, for the choice of concrete cryptographic schemes. Compared with current work on registered ABS, our framework: i) provides diversity in the assumptions used to instantiate the cryptographic primitives; and ii) has advantages in proof size and verification time.
{"title":"Anonymous registered attribute-based signature for circuits","authors":"Liuyu Yang , Xinxuan Zhang , Yi Deng , Zhuo Wu , Xudong Zhu","doi":"10.1016/j.tcs.2025.115681","DOIUrl":"10.1016/j.tcs.2025.115681","url":null,"abstract":"<div><div>Registered attribute-based signature (registered ABS), introduced by Zhang et al. (PKC’24), eliminates the key escrow problem associated with classical attribute-based signature (ABS). It allows users to generate public/sectret key pairs themselves and register the related public key and attribute with a key curator. Different from a trusted attribute authority, the key curator is fully transparent and retains no secrets. In this paper, we propose the first generic framework for anonymous registered ABS that supports circuits as policies. We achieve this goal through an approach we call “accumulate-then-sign-then-prove”, which leverages commonly used cryptographic primitives including digital signature, accumulator, and non-interactive zero-knowledge schemes (NIZKs). We further enrich the functionality by adding user removal, making our scheme dynamic. Our generic framework can be instantiated from various combinations of inner and outer layer protocols based on different assumptions. We provide recommendations from three different perspectives for the choice of concrete cryptographic schemes. Compared with current work on registered ABS, our framework: i) provides diversity regarding the assumptions to instantiate cryptographic primitives; ii) has advantages in proof size and verification time.</div></div>","PeriodicalId":49438,"journal":{"name":"Theoretical Computer Science","volume":"1063 ","pages":"Article 115681"},"PeriodicalIF":1.0,"publicationDate":"2025-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145746974","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-12-03 | DOI: 10.1016/j.tcs.2025.115678
Yuanrui Zhang, Xinxin Liu
We study a phenomenon called “image reflection” on a type of characterization graph, the LLEE chart, for 1-free regular expressions modulo bisimilarity. This property, which states that the images of a bisimulation function from an LLEE chart impose a special LEE structure corresponding to that LLEE chart, is captured by our proposed “well-structured looping-back charts” as a sub-LLEE structure of LLEE charts. As an application, our study naturally leads to a novel proof of the completeness of the inference system BBP for 1-free regular expressions, via the correspondence between 1-free regular expressions and the provable solutions of LEE/LLEE charts. Compared to the previous approach, our proof is more direct in the sense that it does not rely on a graph transformation procedure on LLEE charts in which, at each step, two bisimilar nodes have to be carefully selected and merged according to selection rules. Our observation on LLEE charts is useful for understanding the completeness problems of regular expressions modulo bisimilarity from a new angle, and can also be helpful for solving the completeness problems of other expressions that share similar graph structures.
{"title":"Image reflection on process graphs of 1-free regular expressions modulo bisimilarity","authors":"Yuanrui Zhang , Xinxin Liu","doi":"10.1016/j.tcs.2025.115678","DOIUrl":"10.1016/j.tcs.2025.115678","url":null,"abstract":"<div><div>We study a phenomenon called “image reflection” on a type of characterization graphs — LLEE charts — for 1-free regular expressions modulo bisimularity. This property, stating that the images of a bisimulation function from an LLEE chart actually impose a special LEE structure corresponding to the LLEE chart, is recognized by our proposed “well-structured looping-back charts” as a sub-LLEE-structure of LLEE charts. As an application, our study naturally leads to a novel proof for the completeness of the inference system <span><math><mi>BBP</mi></math></span> for 1-free regular expressions, due to the correspondence between 1-free regular expressions and the provable solutions of LEE/LLEE charts. Compared to the previous approach, our proof is more direct in the sense that it does not rely on a graph transformation procedure on LLEE charts in which at each step two bisimilar nodes have to be carefully selected and merged together according to selection rules. Our observation on LLEE charts is useful to understand the completeness problems of regular expressions modulo bisimilarity from a new angle, and can be also helpful for solving the completeness problems of other expressions that share similar graph structures.</div></div>","PeriodicalId":49438,"journal":{"name":"Theoretical Computer Science","volume":"1063 ","pages":"Article 115678"},"PeriodicalIF":1.0,"publicationDate":"2025-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145705791","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-11-29 | DOI: 10.1016/j.tcs.2025.115659
Chuanye Zheng, Liqiong Xu
The probability of vertex failure in interconnection networks increases with the system scale, so the fault diagnosis of interconnection networks deserves attention and study. Diagnosability is a significant indicator for evaluating network reliability. To measure the diagnosability of a given system more accurately, Ding et al. [1] introduced the non-inclusive diagnosability of a graph. In this work we determine lower bounds on the non-inclusive diagnosability of a class of networks under the PMC model and the MM* model; these bounds apply to non-regular graphs and to some graphs containing triangles. Finally, as applications, we establish the non-inclusive diagnosability of several well-known networks under the two diagnostic models.
{"title":"The non-inclusive diagnosability of a kind of networks","authors":"Chuanye Zheng, Liqiong Xu","doi":"10.1016/j.tcs.2025.115659","DOIUrl":"10.1016/j.tcs.2025.115659","url":null,"abstract":"<div><div>The probability of vertex failure in interconnection networks will enhance with the increase of the system scale, so the fault diagnosis of interconnection networks deserves our attention and study. The diagnosability is a significant indicator to evaluate network reliability. For measuring the diagnosability of a given system more accurately, Ding et al. [1] came up with the non-inclusive diagnosability of a graph. Our work is to determine the lower bounds of non-inclusive diagnosability of a kind of networks under the PMC model and the MM* model, which can be applied to non-regular graphs and some graphs containing triangles. Finally, we propose the non-inclusive diagnosability of some famous networks under the two diagnostic models as applications.</div></div>","PeriodicalId":49438,"journal":{"name":"Theoretical Computer Science","volume":"1062 ","pages":"Article 115659"},"PeriodicalIF":1.0,"publicationDate":"2025-11-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145659151","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-11-28 | DOI: 10.1016/j.tcs.2025.115645
Lyes Attouche, Mohamed-Amine Baazizi, Dario Colazzo, Giorgio Ghelli, Stefan Klessinger, Carlo Sartiani, Stefanie Scherzinger
JSON Schema is a declarative language that allows one to specify the structure of JSON instances using hierarchical schema objects that combine logical and structural operators. Early versions of JSON Schema, known collectively as Classical JSON Schema, operated with a straightforward semantics where a schema’s meaning was completely determined by which JSON values it could successfully validate. This simple foundation enabled researchers to develop robust theoretical frameworks and practical tools for instance validation, and also to determine whether schemas are satisfiable or equivalent to one another. However, Classical JSON Schema had a significant weakness in its inability to effectively express certain kinds of extensions of object schemas.

This limitation prompted a major overhaul in Draft 2019-09, introducing two new features that fundamentally alter how JSON Schema works. The first is annotation dependency, where validation now produces more than just a yes/no result. When a schema validates a JSON instance, it also generates an “annotation” that records which fields and items were “evaluated”. This annotation then influences the behavior of the new operators "unevaluatedProperties" and "unevaluatedItems", creating a dependency that did not exist before. The second feature is dynamic references, a separate mechanism that allows the target of a reference operator to depend on the validation context. These changes were so substantial that all JSON Schema versions from Draft 2019-09 onward are called Modern JSON Schema.

This semantic shift invalidated much of the existing theoretical work, and the algorithms that researchers had developed for Classical JSON Schema, particularly those for determining satisfiability and schema inclusion, do not easily adapt to Modern JSON Schema’s new behavior. One approach to bridge this gap is “elimination”: converting Modern JSON Schema constructs back into equivalent Classical JSON Schema forms. Previous research successfully developed algorithms for eliminating dynamic references, but annotation dependency remained unsolved.

In this paper we solve this problem, providing three contributions: an expressibility result, proving that eliminating annotation-dependent operators is possible; a succinctness result, proving that eliminating annotation-dependent operators can in general cause schemas to grow exponentially in size; and finally a practical algorithm to perform annotation elimination.

Our practical algorithm not only matches the asymptotic lower bound provided by the succinctness theorem, but also includes specific optimizations designed to exploit typical features of real-world schemas. A comprehensive experimental evaluation, executed on a representative set of 305 schemas retrieved from GitHub, shows that …
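To illustrate the annotation dependency described above, here is a small Python sketch (my illustration, not an example from the paper): under Draft 2019-09 semantics, properties validated inside the allOf branches are reported as "evaluated" annotations, so the top-level "unevaluatedProperties": false accepts them while still rejecting unknown fields, something Classical JSON Schema's additionalProperties cannot express directly at the outer level. Validation is assumed to use the third-party jsonschema package in a version that supports Draft 2019-09.

    # Requires: pip install jsonschema (a version with Draft 2019-09 support)
    from jsonschema import Draft201909Validator

    schema = {
        "$schema": "https://json-schema.org/draft/2019-09/schema",
        "allOf": [
            {"properties": {"name": {"type": "string"}}},
            {"properties": {"age": {"type": "integer"}}},
        ],
        # Annotation-dependent operator: it sees which properties the allOf
        # branches evaluated ("name", "age") and rejects everything else.
        "unevaluatedProperties": False,
    }

    validator = Draft201909Validator(schema)
    print(validator.is_valid({"name": "Ada", "age": 36}))         # True
    print(validator.is_valid({"name": "Ada", "nickname": "A."}))  # False: "nickname" unevaluated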
{"title":"Elimination of annotation dependencies in validation for Modern JSON Schema","authors":"Lyes Attouche , Mohamed-Amine Baazizi , Dario Colazzo , Giorgio Ghelli , Stefan Klessinger , Carlo Sartiani , Stefanie Scherzinger","doi":"10.1016/j.tcs.2025.115645","DOIUrl":"10.1016/j.tcs.2025.115645","url":null,"abstract":"<div><div>JSON Schema is a declarative language that allows one to specify the structure of JSON instances using hierarchical schema objects that combine logical and structural operators.2.2 Early versions of JSON Schema, known collectively as Classical JSON Schema, operated with a straightforward semantics where a schema’s meaning was completely determined by which JSON values it could successfully validate. This simple foundation enabled researchers to develop robust theoretical frameworks and practical tools for instance validation and also to determine whether schemas are satisfiable or equivalent to one another. However, Classical JSON Schema had a significant weakness in its inability to effectively express certain kinds of extensions of object schemas.</div><div>This limitation prompted a major overhaul in Draft 2019-09, introducing two new features that fundamentally alter how JSON Schema works. The first is <em>annotation dependency</em>, where validation now produces more than just a yes/no result. When a schema validates a JSON instance, it also generates an “annotation” that records which fields and items were “evaluated”. This annotation then influences the behavior of the new operators \"<span><math><mi>unevaluatedProperties</mi></math></span>\" and \"<span><math><mi>unevaluatedItems</mi></math></span>\", creating a dependency that did not exist before. The second feature is dynamic references, a separate mechanism that allows for the target of a reference operator to depend on the validation context. These changes were so substantial that all JSON Schema versions from Draft 2019-09 onward are called <em>Modern JSON Schema</em>.</div><div>This semantic shift invalidated much of the existing theoretical work, and the algorithms that researchers had developed for Classical JSON Schema — particularly those for determining satisfiability and schema inclusion — do not easily adapt to Modern JSON Schema’s new behavior. One approach to bridge this gap is “elimination” — converting Modern JSON Schema constructs back into equivalent Classical JSON Schema forms. Previous research successfully developed algorithms for eliminating dynamic references, but annotation dependency remained unsolved.</div><div>In this paper we solve this problem, providing three contributions: an <em>expressibility</em> result, proving that eliminating annotation-dependent operators is possible; a <em>succinctness</em> result, proving that eliminating annotation-dependent operators can generally cause schemas to grow exponentially in size, and finally a <em>practical algorithm</em> to perform annotation elimination.</div><div>Our “practical algorithm” not only matches the asymptotic lower-bound that is provided by the succinctness theorem, but it also presents some specific optimizations that we designed to exploit typical features or real-world schemas. 
A comprehensive experimental testing, executed on a representative set of 305 schemas retrieved from GitHub, shows tha","PeriodicalId":49438,"journal":{"name":"Theoretical Computer Science","volume":"1063 ","pages":"Article 115645"},"PeriodicalIF":1.0,"publicationDate":"2025-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145798612","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-11-28 | DOI: 10.1016/j.tcs.2025.115653
Neeldhara Misra, Harshil Mittal, Ashutosh Rai
We study the Boolean Satisfiability problem (SAT) in the framework of diversity, where one asks for multiple solutions that are mutually far apart (i.e., sufficiently dissimilar from each other) for a suitable notion of distance/dissimilarity between solutions. Interpreting assignments as bit vectors, we take their Hamming distance to quantify dissimilarity, and we focus on the problem of finding two solutions. Specifically, we define the problem Max Differ SAT (resp. Exact Differ SAT) as follows: Given a Boolean formula ϕ on n variables, decide whether ϕ has two satisfying assignments that differ on at least (resp. exactly) d variables. We study the classical and parameterized (in the parameters d and n − d) complexities of Max Differ SAT and Exact Differ SAT when restricted to some classes of formulas on which SAT is known to be polynomial-time solvable. In particular, we consider affine formulas, Krom formulas (i.e., 2-CNF formulas) and hitting formulas. For affine formulas, we show the following: Both problems are polynomial-time solvable when each equation has at most two variables. Exact Differ SAT is NP-hard, even when each equation has at most three variables and each variable appears in at most four equations. Also, Max Differ SAT is NP-hard, even when each equation has at most four variables. Both problems are W[1]-hard in the parameter n − d. In contrast, when parameterized by d, Exact Differ SAT is W[1]-hard, but Max Differ SAT admits a single-exponential FPT algorithm and a polynomial kernel. For Krom formulas, we show the following: Both problems are polynomial-time solvable when each variable appears in at most two clauses. Also, both problems are W[1]-hard in the parameter d (and therefore, it turns out, also NP-hard), even on monotone inputs (i.e., formulas with no negative literals). Finally, for hitting formulas, we show that both problems can be solved in polynomial time.
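As a concrete reading of the two variants, a minimal Python sketch (my illustration, not from the paper) that checks the Max Differ / Exact Differ condition for two given satisfying assignments represented as 0/1 vectors:

    from typing import Sequence

    def hamming(a: Sequence[int], b: Sequence[int]) -> int:
        """Number of variables on which two assignments (0/1 vectors) differ."""
        assert len(a) == len(b)
        return sum(x != y for x, y in zip(a, b))

    def max_differ_ok(a: Sequence[int], b: Sequence[int], d: int) -> bool:
        # Max Differ SAT asks for two satisfying assignments differing on >= d variables.
        return hamming(a, b) >= d

    def exact_differ_ok(a: Sequence[int], b: Sequence[int], d: int) -> bool:
        # Exact Differ SAT asks for two satisfying assignments differing on exactly d variables.
        return hamming(a, b) == d

    # Example: two assignments over n = 4 variables that differ on 2 of them.
    a, b = [1, 0, 1, 1], [1, 1, 0, 1]
    print(hamming(a, b), max_differ_ok(a, b, 2), exact_differ_ok(a, b, 3))  # 2 True False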
{"title":"On the parameterized complexity of diverse SAT","authors":"Neeldhara Misra , Harshil Mittal , Ashutosh Rai","doi":"10.1016/j.tcs.2025.115653","DOIUrl":"10.1016/j.tcs.2025.115653","url":null,"abstract":"<div><div>We study the <span>Boolean Satisfiability problem (SAT)</span> in the framework of diversity, where one asks for multiple solutions that are mutually far apart (i.e., sufficiently dissimilar from each other) for a suitable notion of distance/dissimilarity between solutions. Interpreting assignments as bit vectors, we take their Hamming distance to quantify dissimilarity, and we focus on the problem of finding two solutions. Specifically, we define the problem <span>Max</span> <span>Differ SAT</span> (resp. <span>Exact Differ SAT</span>) as follows: Given a Boolean formula <em>ϕ</em> on <em>n</em> variables, decide whether <em>ϕ</em> has two satisfying assignments that differ on at least (resp. exactly) <em>d</em> variables. We study the classical and parameterized (in parameters <em>d</em> and <span><math><mrow><mi>n</mi><mo>−</mo><mi>d</mi></mrow></math></span>) complexities of <span>Max Differ SAT</span> and <span>Exact Differ SAT</span>, when restricted to some classes of formulas on which SAT is known to be polynomial-time solvable. In particular, we consider affine formulas, Krom formulas (i.e., 2-CNF formulas) and hitting formulas. For affine formulas, we show the following: Both problems are polynomial-time solvable when each equation has at most two variables. <span>Exact Differ SAT</span> is <span><math><mi>NP</mi></math></span>-hard, even when each equation has at most three variables and each variable appears in at most four equations. Also, <span>Max Differ SAT</span> is <span><math><mi>NP</mi></math></span>-hard, even when each equation has at most four variables. Both problems are <span><math><mrow><mi>W</mi><mo>[</mo><mn>1</mn><mo>]</mo></mrow></math></span>-hard in the parameter <span><math><mrow><mi>n</mi><mo>−</mo><mi>d</mi></mrow></math></span>. In contrast, when parameterized by <em>d</em>, <span>Exact Differ SAT</span> is <span><math><mrow><mi>W</mi><mo>[</mo><mn>1</mn><mo>]</mo></mrow></math></span>-hard, but <span>Max Differ SAT</span> admits a single-exponential <span><math><mi>FPT</mi></math></span> algorithm and a polynomial-kernel. For Krom formulas, we show the following: Both problems are polynomial-time solvable when each variable appears in at most two clauses. Also, both problems are <span><math><mrow><mi>W</mi><mo>[</mo><mn>1</mn><mo>]</mo></mrow></math></span>-hard in the parameter <em>d</em> (and therefore, it turns out, also <span><math><mi>NP</mi></math></span>-hard), even on monotone inputs (i.e., formulas with no negative literals). Finally, for hitting formulas, we show that both problems can be solved in polynomial-time.</div></div>","PeriodicalId":49438,"journal":{"name":"Theoretical Computer Science","volume":"1062 ","pages":"Article 115653"},"PeriodicalIF":1.0,"publicationDate":"2025-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145659150","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-11-26 | DOI: 10.1016/j.tcs.2025.115656
Kyle Burke, Matthew Ferland, Svenja Huntemann, Shang-Hua Teng
In this paper, we address a natural question at the intersection of combinatorial game theory and computational complexity: “Can a sum of simple tepid games in canonical form be intractable?” To resolve this fundamental question, we consider superstars, positions first introduced in Winning Ways where all options are nimbers. Extending Morris’ classic result with hot games to tepid games, we prove that disjunctive sums of superstars are intractable to solve. This is striking as sums of nimbers can be computed in linear time. Our analysis shows that the game Paint Can is intractable and also yields a new intractable game, Blackout. We present web-playable versions of both games.
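For contrast with the hardness result, the linear-time fact quoted above is the classical nim-sum rule: a disjunctive sum of nimbers is a loss for the player to move exactly when the bitwise XOR of their values is zero. A minimal Python sketch (illustrative, not from the paper):

    from functools import reduce
    from operator import xor

    def nim_sum(nimbers: list[int]) -> int:
        """Grundy value of a disjunctive sum of nimbers: the bitwise XOR."""
        return reduce(xor, nimbers, 0)

    def previous_player_wins(nimbers: list[int]) -> bool:
        # The sum is a loss for the player to move iff its nim-sum is 0.
        return nim_sum(nimbers) == 0

    print(nim_sum([3, 5, 6]), previous_player_wins([3, 5, 6]))  # 0 True
    print(nim_sum([1, 2, 4]), previous_player_wins([1, 2, 4]))  # 7 False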
{"title":"A tractability gap beyond nim-sums: It’s hard to tell whether a bunch of superstars are losers","authors":"Kyle Burke , Matthew Ferland , Svenja Huntemann , Shang-Hua Teng","doi":"10.1016/j.tcs.2025.115656","DOIUrl":"10.1016/j.tcs.2025.115656","url":null,"abstract":"<div><div>In this paper, we address a natural question at the intersection of combinatorial game theory and computational complexity: “Can a sum of simple <em>tepid games</em> in canonical form be intractable?” To resolve this fundamental question, we consider <em>superstars</em>, positions first introduced in <em>Winning Ways</em> where all options are <em>nimbers</em>. Extending Morris’ classic result with hot games to tepid games, we prove that disjunctive sums of superstars are intractable to solve. This is striking as sums of nimbers can be computed in linear time. Our analysis shows that the game <span>Paint Can</span> is intractable and also yields a new intractable game, <span>Blackout</span>. We present web-playable versions of both games.</div></div>","PeriodicalId":49438,"journal":{"name":"Theoretical Computer Science","volume":"1062 ","pages":"Article 115656"},"PeriodicalIF":1.0,"publicationDate":"2025-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145694787","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-11-25 | DOI: 10.1016/j.tcs.2025.115651
Hiroki Hatano, Naoki Kitamura, Taisuke Izumi, Takehiro Ito, Toshimitsu Masuzawa
The independent set reconfiguration problem (ISReconf) is the problem of determining, for two given independent sets of a graph, whether one can be transformed into the other by repeatedly applying a prescribed reconfiguration rule. There are two well-studied reconfiguration rules, called the Token Sliding (TS) rule and the Token Jumping (TJ) rule, and it is known that the complexity status of ISReconf differs between the TS and TJ rules for some graph classes. In this paper, we analyze how changes in reconfiguration rules affect the computational complexity of ISReconf. To this end, we generalize the TS and TJ rules to a unified reconfiguration rule, called the k-Jump rule, which removes one vertex from the current independent set and adds a vertex within distance k of the removed vertex, so as to obtain another independent set of the same cardinality. We give the following three results: First, we show that the reconfigurability of any ISReconf instance does not change for all k ≥ 3. Second, we present a polynomial-time algorithm to solve ISReconf under the 2-Jump rule for split graphs. Third, we consider the shortest variant of ISReconf, which determines whether there is a transformation of at most ℓ steps, for a given integer ℓ ≥ 0. We prove that this shortest variant under the k-Jump rule is NP-complete for chordal graphs of diameter at most 2k + 1, for any k ≥ 3.
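To make the k-Jump rule concrete (in the usual reading, TS corresponds to k = 1 and TJ to unbounded k), here is a small Python sketch, my illustration rather than code from the paper, that checks whether replacing one token by another is a legal single k-Jump step on a graph given as an adjacency-list dict:

    from collections import deque

    def bfs_dist(adj, s, t):
        """Unweighted shortest-path distance from s to t (inf if unreachable)."""
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            if u == t:
                return dist[u]
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        return float("inf")

    def is_k_jump_move(adj, ind_set, v, u, k):
        """One k-Jump step: remove v from the independent set, add a vertex u
        within distance k of v, and require the result to be independent."""
        if v not in ind_set or u in ind_set:
            return False
        if bfs_dist(adj, v, u) > k:
            return False
        new_set = (ind_set - {v}) | {u}
        return all(y not in adj[x] for x in new_set for y in new_set if x != y)

    # Path graph 0-1-2-3-4.
    adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
    print(is_k_jump_move(adj, {0, 2}, 2, 4, k=2))  # True: dist(2,4)=2 and {0,4} is independent
    print(is_k_jump_move(adj, {0, 2}, 2, 1, k=2))  # False: {0,1} is not independent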
{"title":"Independent set reconfiguration under bounded-hop token jumping","authors":"Hiroki Hatano , Naoki Kitamura , Taisuke Izumi , Takehiro Ito , Toshimitsu Masuzawa","doi":"10.1016/j.tcs.2025.115651","DOIUrl":"10.1016/j.tcs.2025.115651","url":null,"abstract":"<div><div>The independent set reconfiguration problem (<span>ISReconf</span>) is the problem of determining, for two given independent sets of a graph, whether one can be transformed into the other by repeatedly applying a prescribed reconfiguration rule. There are two well-studied reconfiguration rules, called the Token Sliding (TS) rule and the Token Jumping (TJ) rule, and it is known that the complexity status of <span>ISReconf</span> differs between the TS and TJ rules for some graph classes. In this paper, we analyze how changes in reconfiguration rules affect the computational complexity of <span>ISReconf</span>. To this end, we generalize the TS and TJ rules to a unified reconfiguration rule, called the <em>k</em>-Jump rule, which removes one vertex from a current independent set and adds a vertex within distance <em>k</em> from the removed vertex to obtain another independent set having the same cardinality. We give the following three results: First, we show that the reconfigurability of any <span>ISReconf</span> instance does not change for all <em>k</em> ≥ 3. Second, we present a polynomial-time algorithm to solve <span>ISReconf</span> under the 2-Jump rule for split graphs. Third, we consider the shortest variant of <span>ISReconf</span>, which determines whether there is a transformation of at most ℓ steps, for a given integer ℓ ≥ 0. We prove that this shortest variant under the <em>k</em>-Jump rule is NP-complete for chordal graphs of diameter at most <span><math><mrow><mn>2</mn><mi>k</mi><mo>+</mo><mn>1</mn></mrow></math></span>, for any <em>k</em> ≥ 3.</div></div>","PeriodicalId":49438,"journal":{"name":"Theoretical Computer Science","volume":"1062 ","pages":"Article 115651"},"PeriodicalIF":1.0,"publicationDate":"2025-11-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145694788","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-11-25 | DOI: 10.1016/j.tcs.2025.115660
Istvan Miklos, Cordian Riener
We establish the #P-hardness of computing a broad class of immanants, even when restricted to specific categories of matrices. Concretely, we prove that computing λ-immanants of 0-1 matrices is #P-hard whenever the partition λ contains a sufficiently large domino-tileable region, subject to certain technical conditions.
We also give hardness proofs for some λ-immanants of weighted adjacency matrices of planar directed graphs, where the shape λ = (1 + λ_d) has size n with |λ_d| = n^ε for some 0 < ε < 1/2, and where, for some w, the shape λ_d/(w) is tileable with 1 × 2 dominos.
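For reference, the standard definition behind the λ-immanants discussed above (textbook material, not restated in the abstract): for a partition λ of n with irreducible character χ_λ of the symmetric group S_n, and an n × n matrix A = (a_ij),

    \operatorname{Imm}_\lambda(A) \;=\; \sum_{\sigma \in S_n} \chi_\lambda(\sigma) \prod_{i=1}^{n} a_{i,\sigma(i)}

The sign character (λ = (1ⁿ)) gives the determinant, and the trivial character (λ = (n)) gives the permanent, whose computation on 0-1 matrices is the classical #P-hard benchmark.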
{"title":"#P-Hardness proofs of matrix immanants evaluated on restricted matrices","authors":"Istvan Miklos , Cordian Riener","doi":"10.1016/j.tcs.2025.115660","DOIUrl":"10.1016/j.tcs.2025.115660","url":null,"abstract":"<div><div>We establish the #<em>P</em>-hardness of computing a broad class of immanants, even when restricted to specific categories of matrices. Concretely, we prove that computing <em>λ</em>-immanants of <span><math><mrow><mn>0</mn><mspace></mspace><mo>−</mo><mspace></mspace><mn>1</mn></mrow></math></span> matrices is #<em>P</em>-hard whenever the partition <em>λ</em> contains a sufficiently large domino-tileable region, subject to certain technical conditions.</div><div>We also give hardness proofs for some <em>λ</em>-immanants of weighted adjacency matrices of planar directed graphs, such that the shape <span><math><mrow><mi>λ</mi><mo>=</mo><mo>(</mo><mn>1</mn><mo>+</mo><msub><mi>λ</mi><mi>d</mi></msub><mo>)</mo></mrow></math></span> has size <em>n</em> such that <span><math><mrow><mrow><mo>|</mo></mrow><msub><mi>λ</mi><mi>d</mi></msub><mrow><mo>|</mo><mo>=</mo></mrow><msup><mi>n</mi><mrow><mi>ε</mi></mrow></msup></mrow></math></span> for some <span><math><mrow><mn>0</mn><mo><</mo><mrow><mi>ε</mi></mrow><mo><</mo><mfrac><mn>1</mn><mn>2</mn></mfrac></mrow></math></span>, and such that for some <em>w</em>, the shape <em>λ<sub>d</sub></em>/(<em>w</em>) is tileable with 1 × 2 dominos.</div></div>","PeriodicalId":49438,"journal":{"name":"Theoretical Computer Science","volume":"1062 ","pages":"Article 115660"},"PeriodicalIF":1.0,"publicationDate":"2025-11-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145659152","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}