Avoiding trusted setup in isogeny-based commitments
Pub Date: 2025-05-02 | DOI: 10.1007/s10623-025-01633-9
Gustave Tchoffo Saah, Tako Boris Fouotsa, Emmanuel Fouotsa, Célestin Nkuimi-Jugnia
In 2021, Sterner proposed a commitment scheme based on supersingular isogenies. For this scheme to be binding, one relies on a trusted party to generate a starting supersingular elliptic curve of unknown endomorphism ring. In fact, knowledge of the endomorphism ring allows one to compute an endomorphism whose degree is a power of a given small prime. Such an endomorphism can then be split into two to obtain two different messages with the same commitment. This is the reason why one needs a curve of unknown endomorphism ring, and the only known way to generate such supersingular curves is to rely on a trusted party or on some expensive multiparty computation. We observe that if the degree of the endomorphism in play is well chosen, then knowledge of the endomorphism ring is not sufficient to efficiently compute such an endomorphism, and in some particular cases one can even prove that endomorphisms of a certain degree do not exist. Leveraging these observations, we adapt Sterner’s commitment scheme in such a way that the endomorphism ring of the starting curve can be known and public. This allows us to obtain isogeny-based commitment schemes that can be instantiated without trusted setup requirements.
{"title":"Avoiding trusted setup in isogeny-based commitments","authors":"Gustave Tchoffo Saah, Tako Boris Fouotsa, Emmanuel Fouotsa, Célestin Nkuimi-Jugnia","doi":"10.1007/s10623-025-01633-9","DOIUrl":"https://doi.org/10.1007/s10623-025-01633-9","url":null,"abstract":"<p>In 2021, Sterner proposed a commitment scheme based on supersingular isogenies. For this scheme to be binding, one relies on a trusted party to generate a starting supersingular elliptic curve of unknown endomorphism ring. In fact, the knowledge of the endomorphism ring allows one to compute an endomorphism of degree a power of a given small prime. Such an endomorphism can then be split into two to obtain two different messages with the same commitment. This is the reason why one needs a curve of unknown endomorphism ring, and the only known way to generate such supersingular curves is to rely on a trusted party or on some expensive multiparty computation. We observe that if the degree of the endomorphism in play is well chosen, then the knowledge of the endomorphism ring is not sufficient to efficiently compute such an endomorphism and in some particular cases, one can even prove that endomorphism of a certain degree do not exist. Leveraging these observations, we adapt Sterner’s commitment scheme in such a way that the endomorphism ring of the starting curve can be known and public. This allows us to obtain isogeny-based commitment schemes which can be instantiated without trusted setup requirements.</p>","PeriodicalId":11130,"journal":{"name":"Designs, Codes and Cryptography","volume":"51 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2025-05-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143898087","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Weak colourings of Kirkman triple systems
Pub Date: 2025-05-01 | DOI: 10.1007/s10623-025-01635-7
Andrea C. Burgess, Nicholas J. Cavenagh, Peter Danziger, David A. Pike
A \(\delta\)-colouring of the point set of a block design is said to be weak if no block is monochromatic. The chromatic number \(\chi(S)\) of a block design S is the smallest integer \(\delta\) such that S has a weak \(\delta\)-colouring. It has previously been shown that any Steiner triple system has chromatic number at least 3 and that for each \(v \equiv 1\) or \(3 \pmod{6}\) there exists a Steiner triple system on v points that has chromatic number 3. Moreover, for each integer \(\delta \geqslant 3\) there exist infinitely many Steiner triple systems with chromatic number \(\delta\). We consider colourings of the subclass of Steiner triple systems which are resolvable. A Kirkman triple system consists of a resolvable Steiner triple system together with a partition of its blocks into parallel classes. We show that for each \(v \equiv 3 \pmod{6}\) there exists a Kirkman triple system on v points with chromatic number 3. We also show that for each integer \(\delta \geqslant 3\), there exist infinitely many Kirkman triple systems with chromatic number \(\delta\). We close with several open problems.
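To make the colouring notions concrete, here is a minimal brute-force sketch (our own illustration, not taken from the paper) that finds a weak 3-colouring of the Kirkman triple system on 9 points given by the lines of AG(2, 3); the point labelling and search strategy are our own choices.

```python
# A minimal sketch: brute-force a weak 3-colouring of the Kirkman triple system
# on 9 points given by the lines of AG(2, 3).
# Points are 0..8, read as (a, b) -> 3*a + b with a, b in {0, 1, 2}.
from itertools import product

# Four parallel classes (the resolution): slopes 0, 1, 2 and the vertical lines.
parallel_classes = [
    [{0, 3, 6}, {1, 4, 7}, {2, 5, 8}],   # b = const
    [{0, 1, 2}, {3, 4, 5}, {6, 7, 8}],   # a = const
    [{0, 4, 8}, {1, 5, 6}, {2, 3, 7}],   # b = a + const
    [{0, 5, 7}, {1, 3, 8}, {2, 4, 6}],   # b = 2a + const
]
blocks = [blk for pc in parallel_classes for blk in pc]

def is_weak(colouring):
    """True if no block is monochromatic under the given point colouring."""
    return all(len({colouring[p] for p in blk}) > 1 for blk in blocks)

# Search all 3^9 colourings; any hit witnesses chromatic number <= 3
# (it is at least 3 for every Steiner triple system, as the abstract recalls).
witness = next(c for c in product(range(3), repeat=9) if is_weak(c))
print("weak 3-colouring:", witness)
```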
{"title":"Weak colourings of Kirkman triple systems","authors":"Andrea C. Burgess, Nicholas J. Cavenagh, Peter Danziger, David A. Pike","doi":"10.1007/s10623-025-01635-7","DOIUrl":"https://doi.org/10.1007/s10623-025-01635-7","url":null,"abstract":"<p>A <span>(delta )</span>-colouring of the point set of a block design is said to be <i>weak</i> if no block is monochromatic. The <i>chromatic number</i> <span>(chi (S))</span> of a block design <i>S</i> is the smallest integer <span>(delta )</span> such that <i>S</i> has a weak <span>(delta )</span>-colouring. It has previously been shown that any Steiner triple system has chromatic number at least 3 and that for each <span>(vequiv 1)</span> or <span>(3pmod {6})</span> there exists a Steiner triple system on <i>v</i> points that has chromatic number 3. Moreover, for each integer <span>(delta geqslant 3)</span> there exist infinitely many Steiner triple systems with chromatic number <span>(delta )</span>. We consider colourings of the subclass of Steiner triple systems which are resolvable. A <i>Kirkman triple system</i> consists of a resolvable Steiner triple system together with a partition of its blocks into parallel classes. We show that for each <span>(vequiv 3pmod {6})</span> there exists a Kirkman triple system on <i>v</i> points with chromatic number 3. We also show that for each integer <span>(delta geqslant 3)</span>, there exist infinitely many Kirkman triple systems with chromatic number <span>(delta )</span>. We close with several open problems.</p>","PeriodicalId":11130,"journal":{"name":"Designs, Codes and Cryptography","volume":"26 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2025-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143893852","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Lattice codes for CRYSTALS-Kyber
Pub Date: 2025-05-01 | DOI: 10.1007/s10623-025-01640-w
Shuiyin Liu, Amin Sakzad
This paper describes a constant-time lattice encoder for the National Institute of Standards and Technology (NIST) recommended post-quantum encryption algorithm: Kyber. The first main contribution of this paper is to refine the analysis of Kyber decoding noise and prove that it can be bounded by a sphere. This result shows that the Kyber encoding problem is essentially a sphere packing in a hypercube. The original Kyber encoder uses the integer lattice for sphere packing purposes, which is far from optimal. Our second main contribution is to construct optimal lattice codes that ensure denser packing and a lower decryption failure rate (DFR). Given the same ciphertext size as the original Kyber, the proposed lattice encoder enjoys a larger decoding radius and is able to encode many more information bits. This way we achieve a decrease in the communication cost of up to \(32.6\%\) and a reduction of the DFR by a factor of up to \(2^{85}\). Given the same plaintext size as the original Kyber, e.g., 256 bits, we propose a bit-interleaved coded modulation (BICM) approach, which combines a BCH code and the proposed lattice encoder. The proposed BICM scheme significantly reduces the DFR of Kyber, thus enabling further compression of the ciphertext. Compared with the original Kyber encoder, the communication cost is reduced by \(24.49\%\), while the DFR is decreased by a factor of \(2^{39}\). The proposed encoding scheme is a constant-time algorithm and is thus resistant to timing side-channel attacks.
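As background for what the abstract calls the integer-lattice encoder, the sketch below (our own illustration, using the standard Kyber modulus q = 3329 and one bit per coefficient) shows per-coefficient encoding and the q/4 noise threshold; with this baseline encoder the correct-decoding region is a hypercube around each codeword, which is what the paper's lattice codes pack more efficiently.

```python
# A minimal sketch of the baseline per-coefficient ("integer lattice") encoding
# that the abstract's lattice codes improve on. Standard Kyber parameters:
# q = 3329, n = 256 message bits, one bit per polynomial coefficient.
import random

q, n = 3329, 256

def encode(bits):
    """Map each bit b to round(b * q / 2): 0 -> 0, 1 -> 1665."""
    return [(b * (q + 1)) // 2 for b in bits]

def decode(coeffs):
    """Decide each coefficient by its distance to 0 versus q/2 (mod q)."""
    out = []
    for c in coeffs:
        d0 = min(c % q, q - c % q)               # distance to 0
        d1 = abs(c % q - (q + 1) // 2)           # distance to q/2
        out.append(0 if d0 <= d1 else 1)
    return out

# Decoding succeeds whenever every noise coordinate stays below q/4 in absolute
# value, i.e. the correct-decoding region is a hypercube around each codeword.
msg = [random.randint(0, 1) for _ in range(n)]
noise = [random.randint(-q // 4 + 1, q // 4 - 1) for _ in range(n)]
received = [(c + e) % q for c, e in zip(encode(msg), noise)]
assert decode(received) == msg
print("per-coefficient decoding succeeded with |noise| < q/4")
```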
{"title":"Lattice codes for CRYSTALS-Kyber","authors":"Shuiyin Liu, Amin Sakzad","doi":"10.1007/s10623-025-01640-w","DOIUrl":"https://doi.org/10.1007/s10623-025-01640-w","url":null,"abstract":"<p>This paper describes a constant-time lattice encoder for the National Institute of Standards and Technology (NIST) recommended post-quantum encryption algorithm: Kyber. The first main contribution of this paper is to refine the analysis of Kyber decoding noise and prove that Kyber decoding noise can be bounded by a sphere. This result shows that the Kyber encoding problem is essentially a sphere packing in a hypercube. The original Kyber encoder uses the integer lattice for sphere packing purposes, which is far from optimal. Our second main contribution is to construct optimal lattice codes to ensure denser packing and a lower decryption failure rate (DFR). Given the same ciphertext size as the original Kyber, the proposed lattice encoder enjoys a larger decoding radius, and is able to encode much more information bits. This way we achieve a decrease of the communication cost by up to <span>(32.6%)</span>, and a reduction of the DFR by a factor of up to <span>(2^{85})</span>. Given the same plaintext size as the original Kyber, e.g., 256 bits, we propose a bit-interleaved coded modulation (BICM) approach, which combines a BCH code and the proposed lattice encoder. The proposed BICM scheme significantly reduces the DFR of Kyber, thus enabling further compression of the ciphertext. Compared with the original Kyber encoder, the communication cost is reduced by <span>(24.49%)</span>, while the DFR is decreased by a factor of <span>(2^{39})</span>. The proposed encoding scheme is a constant-time algorithm, thus resistant against the timing side-channel attacks.</p>","PeriodicalId":11130,"journal":{"name":"Designs, Codes and Cryptography","volume":"114 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2025-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143893779","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Rational transformations over finite fields that are never irreducible
Pub Date: 2025-04-24 | DOI: 10.1007/s10623-025-01591-2
Max Schulz
Rational transformations play an important role in the construction of irreducible polynomials over finite fields. Usually, the methods involve fixing a rational function Q and deriving conditions on polynomials \(F \in \mathbb{F}_q[x]\) such that the rational transformation of F with Q is irreducible. Here we want to change the perspective and study rational functions with which the rational transformation never yields irreducible polynomials. We show that if the rational function is contained in certain subfields of \(\mathbb{F}_q(x)\), then the rational transformation with it is always reducible. This extends the list of known examples.
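The simplest known example of this phenomenon (classical, and only an illustration of the kind of statement the paper generalises) is Q(x) = x^p in characteristic p: the transformed polynomial is a p-th power and hence reducible. Over \(\mathbb{F}_2\) this reads F(x^2) = F(x)^2, which the short check below verifies.

```python
# A tiny illustration (classical example, not the paper's new result): over F_2
# the transformation with Q(x) = x^2 is never irreducible, because
# F(x^2) = F(x)^2 in characteristic 2.
import random

def poly_mul_mod2(a, b):
    """Multiply polynomials over F_2; a, b are coefficient lists, low degree first."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] ^= ai & bj
    return out

def substitute_x_squared(f):
    """Return F(x^2) from F(x): spread coefficients into the even positions."""
    out = [0] * (2 * len(f) - 1)
    out[::2] = f
    return out

for _ in range(5):
    f = [random.randint(0, 1) for _ in range(5)] + [1]   # random monic degree-5 F over F_2
    assert substitute_x_squared(f) == poly_mul_mod2(f, f)
print("F(x^2) = F(x)^2 over F_2, so the transformation with Q = x^2 is always reducible")
```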
{"title":"Rational transformations over finite fields that are never irreducible","authors":"Max Schulz","doi":"10.1007/s10623-025-01591-2","DOIUrl":"https://doi.org/10.1007/s10623-025-01591-2","url":null,"abstract":"<p>Rational transformations play an important role in the construction of irreducible polynomials over finite fields. Usually, the methods involve fixing a rational function <i>Q</i> and deriving conditions on polynomials <span>(Fin mathbb {F}_q[x])</span> such that the rational transformation of <i>F</i> with <i>Q</i> is irreducible. Here we want to change the perspective and study rational functions with which the rational transformation never yields irreducible polynomials. We show that if the rational function is contained in certain subfields of <span>(mathbb {F}_q(x))</span> then the rational transformation with it is always reducible. This extends the list of known examples.</p>","PeriodicalId":11130,"journal":{"name":"Designs, Codes and Cryptography","volume":"3 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2025-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143872961","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Permutations minimizing the number of collinear triples
Pub Date: 2025-04-23 | DOI: 10.1007/s10623-025-01632-w
Joshua Cooper, Jack Hyatt
We characterize the permutations of \(\mathbb{F}_q\) whose graph minimizes the number of collinear triples and describe the lexicographically least one, confirming a conjecture of Cooper–Solymosi. This question is connected to Dudeney’s No-3-in-a-Line problem, the Heilbronn triangle problem, and the structure of finite plane Kakeya sets. We discuss a connection with complete sets of mutually orthogonal Latin squares and state a few open problems, primarily about general finite affine planes.
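For small fields, the quantity being minimized can be computed by brute force. The sketch below (our own illustration, not the paper's construction) counts collinear triples in the graph \(\{(x, \sigma(x))\}\) of a permutation \(\sigma\) of \(\mathbb{F}_p\) and searches all permutations for p = 7.

```python
# A small brute-force sketch: count collinear triples in the graph
# {(x, s(x)) : x in F_p} of a permutation s of F_p, for a small prime p.
from itertools import combinations, permutations

def collinear_triples(sigma, p):
    """Number of 3-subsets of the graph of sigma lying on a common line of AG(2, p)."""
    pts = [(x, sigma[x]) for x in range(p)]
    count = 0
    for (x1, y1), (x2, y2), (x3, y3) in combinations(pts, 3):
        # Three points are collinear iff this determinant vanishes mod p.
        det = (x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)
        count += (det % p == 0)
    return count

p = 7
best = min(permutations(range(p)), key=lambda s: collinear_triples(s, p))
print("p =", p, "minimum number of collinear triples:", collinear_triples(best, p))
print("a minimizing permutation:", best)
```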
{"title":"Permutations minimizing the number of collinear triples","authors":"Joshua Cooper, Jack Hyatt","doi":"10.1007/s10623-025-01632-w","DOIUrl":"https://doi.org/10.1007/s10623-025-01632-w","url":null,"abstract":"<p>We characterize the permutations of <span>(mathbb {F}_q)</span> whose graph minimizes the number of collinear triples and describe the lexicographically-least one, confirming a conjecture of Cooper-Solymosi. This question is connected to Dudeney’s No-3-in-a-Line problem, the Heilbronn triangle problem, and the structure of finite plane Kakeya sets. We discuss a connection with complete sets of mutually orthogonal latin squares and state a few open problems primarily about general finite affine planes.</p>","PeriodicalId":11130,"journal":{"name":"Designs, Codes and Cryptography","volume":"5 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2025-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143866496","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Coding properties and automorphism groups of two classes of twisted generalized Reed–Solomon codes
Pub Date: 2025-04-19 | DOI: 10.1007/s10623-025-01630-y
Xue Jia, Qin Yue, Huan Sun
Twisted generalized Reed–Solomon (TGRS) codes, a generalization of generalized Reed–Solomon (GRS) codes, have attracted considerable attention from researchers in recent years. In this paper, we investigate the conditions under which two classes of TGRS codes with different parameters are equal. Moreover, we construct the permutation automorphism groups of two classes of TGRS codes and show that they are quasi-cyclic codes. Finally, building upon the Berlekamp–Massey algorithm for GRS codes, we present a decoding scheme for a class of MDS TGRS codes.
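For readers less familiar with the untwisted objects, the sketch below (illustrative only; it does not implement the twisted construction or the decoder of the paper) encodes with a small generalized Reed–Solomon code GRS_k(a, v) and verifies by exhaustive search that it is MDS.

```python
# A minimal sketch of the underlying objects: a generalized Reed-Solomon code
# GRS_k(a, v) over F_q encodes a polynomial f of degree < k as
# (v_1 f(a_1), ..., v_n f(a_n)). Here we only check that the plain GRS code is MDS.
from itertools import product

q, n, k = 7, 6, 3
alphas = list(range(1, n + 1))          # distinct evaluation points in F_7
v = [1] * n                             # column multipliers (all 1 for simplicity)

def encode(msg):
    """Evaluate f(x) = msg[0] + msg[1] x + ... + msg[k-1] x^(k-1) at the alphas."""
    return [vi * sum(m * pow(a, i, q) for i, m in enumerate(msg)) % q
            for a, vi in zip(alphas, v)]

min_weight = min(sum(c != 0 for c in encode(msg))
                 for msg in product(range(q), repeat=k) if any(msg))
print("minimum distance:", min_weight, "== n - k + 1 =", n - k + 1)   # MDS: 4
```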
{"title":"Coding properties and automorphism groups of two classes of twisted generalized Reed–Solomon codes","authors":"Xue Jia, Qin Yue, Huan Sun","doi":"10.1007/s10623-025-01630-y","DOIUrl":"https://doi.org/10.1007/s10623-025-01630-y","url":null,"abstract":"<p>Twisted generalized Reed–Solomon (TGRS) codes as a generalization of generalized Reed–Solomon (GRS) codes have attracted a lot of attention from many researchers in recent years. In this paper, we investigate the conditions for the equality of two classes of TGRS codes with different parameters. Moreover, we construct the permutation automorphism groups of two classes of TGRS codes and show they are quasi-cyclic codes. Finally, building upon the Berlekamp–Massey algorithm for GRS codes, we show a decoding scheme for a class of MDS TGRS codes.</p>","PeriodicalId":11130,"journal":{"name":"Designs, Codes and Cryptography","volume":"65 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2025-04-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143849722","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Understanding the new distinguisher of alternant codes at degree 2
Pub Date: 2025-04-19 | DOI: 10.1007/s10623-025-01626-8
Axel Lemoine, Rocco Mora, Jean-Pierre Tillich
Distinguishing Goppa codes or alternant codes from generic linear codes (Faugère et al. in Proceedings of the IEEE Information Theory Workshop—ITW 2011, Paraty, Brasil, October 2011, pp. 282–286, 2011) has been shown to be a first step before being able to attack the McEliece cryptosystem based on those codes (Bardet et al. in IEEE Trans Inf Theory 70(6):4492–4511, 2024). Whereas the distinguisher of Faugère et al. (2011) is only able to distinguish Goppa codes or alternant codes of rate very close to 1, in Couvreur et al. (in: Guo and Steinfeld (eds) Advances in Cryptology—ASIACRYPT 2023—29th International Conference on the Theory and Application of Cryptology and Information Security, Guangzhou, China, December 4–8, 2023, Proceedings, Part IV, Volume 14441 of LNCS, pp. 3–38, Springer, 2023) a much more powerful (and more general) distinguisher was proposed. It is based on computing the Hilbert series \(\{\textrm{HF}(d),\; d \in \mathbb{N}\}\) of a Pfaffian modeling. The distinguisher of Faugère et al. (2011) can be interpreted as computing \(\textrm{HF}(1)\). Computing \(\textrm{HF}(2)\) still gives a polynomial-time distinguisher for alternant or Goppa codes and is apparently able to distinguish Goppa or alternant codes in a much broader regime of rates than the one of Faugère et al. (2011). However, the scope of this distinguisher was unclear. We give here a formula for \(\textrm{HF}(2)\) corresponding to generic alternant codes when the field size q satisfies \(q \geqslant r\), where r is the degree of the alternant code. We also show that this expression for \(\textrm{HF}(2)\) provides a lower bound in general. The value of \(\textrm{HF}(2)\) corresponding to random linear codes is known, and this yields a precise description of the new regime of rates that can be distinguished by this new method. This shows that the new distinguisher improves significantly upon the one given in Faugère et al. (2011).
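A rough feeling for this family of distinguishers can be had from the dimension of the Schur (component-wise) square of a code, a statistic closely related to the degree-1 distinguisher mentioned in the abstract: for a GRS code of dimension k it is 2k − 1, while for a random [n, k] code it is typically min(n, k(k+1)/2). The sketch below (our own illustration; the paper works at degree 2 with a Pfaffian modeling, which is not reproduced here) compares the two.

```python
# A hedged sketch of the simplest statistic in this family: the dimension of the
# Schur (component-wise) square of a code, computed for a GRS code and for a
# random code of the same length and dimension over F_11.
import random
from itertools import combinations_with_replacement

q, n, k = 11, 10, 3

def rank_mod_q(rows):
    """Gaussian elimination over F_q; returns the rank of the row list."""
    rows = [r[:] for r in rows]
    rank, col = 0, 0
    while rank < len(rows) and col < n:
        piv = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if piv is None:
            col += 1
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        inv = pow(rows[rank][col], q - 2, q)
        rows[rank] = [(x * inv) % q for x in rows[rank]]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                factor = rows[i][col]
                rows[i] = [(a - factor * b) % q for a, b in zip(rows[i], rows[rank])]
        rank += 1
        col += 1
    return rank

def square_code_dim(gen):
    """Dimension of the span of all component-wise products of pairs of rows."""
    prods = [[(a * b) % q for a, b in zip(r1, r2)]
             for r1, r2 in combinations_with_replacement(gen, 2)]
    return rank_mod_q(prods)

alphas = list(range(1, n + 1))
grs_gen = [[pow(a, i, q) for a in alphas] for i in range(k)]        # GRS generator matrix
rand_gen = [[random.randint(0, q - 1) for _ in range(n)] for _ in range(k)]

print("square-code dimension, GRS   :", square_code_dim(grs_gen))   # expect 2k - 1 = 5
print("square-code dimension, random:", square_code_dim(rand_gen))  # typically min(n, 6) = 6
```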
{"title":"Understanding the new distinguisher of alternant codes at degree 2","authors":"Axel Lemoine, Rocco Mora, Jean-Pierre Tillich","doi":"10.1007/s10623-025-01626-8","DOIUrl":"https://doi.org/10.1007/s10623-025-01626-8","url":null,"abstract":"<p>Distinguishing Goppa codes or alternant codes from generic linear codes (Faugère et al. in Proceedings of the IEEE Information Theory Workshop—ITW 2011, Paraty, Brasil, October 2011, pp. 282–286, 2011) has been shown to be a first step before being able to attack McEliece cryptosystem based on those codes (Bardet et al. in IEEE Trans Inf Theory 70(6):4492–4511, 2024). Whereas the distinguisher of Faugère et al. (2011) is only able to distinguish Goppa codes or alternant codes of rate very close to 1, in Couvreur et al. (in: Guo and Steinfeld (eds) Advances in Cryptology—ASIACRYPT 2023—29th International Conference on the Theory and Application of Cryptology and Information Security, Guangzhou, China, December 4–8, 2023, Proceedings, Part IV, Volume 14441 of LNCS, pp. 3–38, Springer, 2023) a much more powerful (and more general) distinguisher was proposed. It is based on computing the Hilbert series <span>({{{,textrm{HF},}}(d),;d in mathbb {N}})</span> of a Pfaffian modeling. The distinguisher of Faugère et al. (2011) can be interpreted as computing <span>({{,textrm{HF},}}(1))</span>. Computing <span>({{,textrm{HF},}}(2))</span> still gives a polynomial time distinguisher for alternant or Goppa codes and is apparently able to distinguish Goppa or alternant codes in a much broader regime of rates as the one of Faugère et al. (2011). However, the scope of this distinguisher was unclear. We give here a formula for <span>({{,textrm{HF},}}(2))</span> corresponding to generic alternant codes when the field size <i>q</i> satisfies <span>(q geqslant r)</span>, where <i>r</i> is the degree of the alternant code. We also show that this expression for <span>({{,textrm{HF},}}(2))</span> provides a lower bound in general. The value of <span>({{,textrm{HF},}}(2))</span> corresponding to random linear codes is known and this yields a precise description of the new regime of rates that can be distinguished by this new method. This shows that the new distinguisher improves significantly upon the one given in Faugère et al. (2011).</p>","PeriodicalId":11130,"journal":{"name":"Designs, Codes and Cryptography","volume":"17 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2025-04-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143849751","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

New upper bounds for wide-sense frameproof codes
Pub Date: 2025-04-18 | DOI: 10.1007/s10623-025-01631-x
Chengyu Sun, Xin Wang
Frameproof codes are used to fingerprint digital data. They can prevent copyrighted materials from unauthorized use. Determining the maximum size of frameproof codes is a crucial problem in this research area. In this paper, we study the upper bounds for frameproof codes under the Boneh–Shaw descendant (wide-sense descendant) model. First, we give new upper bounds for wide-sense 2-frameproof codes that improve the known results. Then we take the alphabet size into consideration and answer an open question in this area. Finally, we improve the general upper bounds for wide-sense t-frameproof codes.
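To fix ideas, the brute-force sketch below spells out the wide-sense (Boneh–Shaw) descendant set and checks the 2-frameproof property for a toy binary code; the definitions are the ones commonly used in this literature, stated here as our reading rather than quoted from the paper.

```python
# A brute-force sketch (definitions as commonly used in this literature, stated
# as assumptions): under the Boneh-Shaw model, a coalition C can produce any word
# that agrees with C on every position where all members of C agree; elsewhere the
# symbol is arbitrary. A code is wide-sense t-frameproof if such a word can never
# equal a codeword outside C.
from itertools import combinations, product

def wide_sense_descendants(coalition, alphabet):
    """All words the coalition can produce under the Boneh-Shaw marking assumption."""
    choices = []
    for symbols in zip(*coalition):
        agreed = set(symbols)
        choices.append(list(agreed) if len(agreed) == 1 else list(alphabet))
    return {tuple(w) for w in product(*choices)}

def is_wide_sense_frameproof(code, t, alphabet):
    code = [tuple(c) for c in code]
    for size in range(1, t + 1):
        for coalition in combinations(code, size):
            framed = wide_sense_descendants(coalition, alphabet) & set(code)
            if not framed <= set(coalition):
                return False
    return True

code = [(0, 0, 0, 0), (0, 1, 1, 0), (1, 0, 1, 1)]
print("wide-sense 2-frameproof:", is_wide_sense_frameproof(code, 2, alphabet=(0, 1)))
```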
{"title":"New upper bounds for wide-sense frameproof codes","authors":"Chengyu Sun, Xin Wang","doi":"10.1007/s10623-025-01631-x","DOIUrl":"https://doi.org/10.1007/s10623-025-01631-x","url":null,"abstract":"<p>Frameproof codes are used to fingerprint digital data. It can prevent copyrighted materials from unauthorized use. To determine the maximum size of the frameproof codes is a crucial problem in this research area. In this paper, we study the upper bounds for frameproof codes under Boneh-Shaw descendant (wide-sense descendant). First, we give new upper bounds for wide-sense 2-frameproof codes to improve the known results. Then we take the alphabet size into consideration and answer an open question in this area. Finally, we improve the general upper bounds for wide-sense <i>t</i>-frameproof codes.</p>","PeriodicalId":11130,"journal":{"name":"Designs, Codes and Cryptography","volume":"28 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2025-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143849750","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Knot theory and error-correcting codes
Pub Date: 2025-04-18 | DOI: 10.1007/s10623-025-01615-x
Altan B. Kılıç, Anne Nijsten, Ruud Pellikaan, Alberto Ravagnani
This paper builds a novel bridge between algebraic coding theory and mathematical knot theory, with applications in both directions. We give methods to construct error-correcting codes starting from the colorings of a knot, describing through a series of results how the properties of the knot translate into code parameters. We show that knots can be used to obtain error-correcting codes with prescribed parameters and an efficient decoding algorithm.
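As a concrete taste of codes from knot colorings (using the standard notion of Fox n-colourings; whether this matches the paper's exact construction is an assumption on our part), the sketch below lists the Fox 3-colourings of the trefoil knot and checks that they form a linear code of length 3 and dimension 2 over \(\mathbb{F}_3\).

```python
# A small sketch: the Fox 3-colourings of the trefoil knot form a linear code
# over F_3. Each crossing with overstrand a and understrands b, c imposes
# 2a = b + c (mod 3); the trefoil has 3 arcs and 3 crossings.
from itertools import product

n, arcs = 3, 3
crossings = [(0, 1, 2), (1, 2, 0), (2, 0, 1)]   # (overstrand, understrand, understrand)

def is_colouring(col):
    return all((2 * col[a] - col[b] - col[c]) % n == 0 for a, b, c in crossings)

colourings = [c for c in product(range(n), repeat=arcs) if is_colouring(c)]

# The colourings are the solutions of a linear system over F_3, so they form a
# linear code; for the trefoil every crossing relation reduces to a + b + c = 0 mod 3.
closed = all(tuple((x + y) % n for x, y in zip(u, v)) in colourings
             for u in colourings for v in colourings)
print("number of 3-colourings:", len(colourings))    # 9 = 3^2, a [3, 2] code over F_3
print("closed under addition:", closed)
```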
{"title":"Knot theory and error-correcting codes","authors":"Altan B. Kılıç, Anne Nijsten, Ruud Pellikaan, Alberto Ravagnani","doi":"10.1007/s10623-025-01615-x","DOIUrl":"https://doi.org/10.1007/s10623-025-01615-x","url":null,"abstract":"<p>This paper builds a novel bridge between algebraic coding theory and mathematical knot theory, with applications in both directions. We give methods to construct error-correcting codes starting from the colorings of a knot, describing through a series of results how the properties of the knot translate into code parameters. We show that knots can be used to obtain error-correcting codes with prescribed parameters and an efficient decoding algorithm.</p>","PeriodicalId":11130,"journal":{"name":"Designs, Codes and Cryptography","volume":"10 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2025-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143849749","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Utilizing two subfields to accelerate individual logarithm computation in extended tower number field sieve
Pub Date: 2025-04-10 | DOI: 10.1007/s10623-025-01590-3
Yuqing Zhu, Chang Lv, Jiqiang Liu
The hardness of the discrete logarithm problem (DLP) over finite fields forms the security foundation of many cryptographic schemes. When the characteristic is not small, the state-of-the-art algorithms for solving the DLP are the number field sieve (NFS) and its variants. NFS first computes the logarithms of the factor base, which consists of elements of small norm. Then, for a target element, its logarithm is calculated by establishing a relation with the factor base. Although computing the logarithms of the factor base is the most time-consuming part of NFS, it can be performed only once and treated as precomputation for a fixed finite field when multiple logarithms need to be computed. In this paper, we present a method for accelerating individual logarithm computation by utilizing two subfields. We focus on the case where the extension degree of the finite field is a multiple of 6, within the extended tower number field sieve framework. Our method allows for the construction of an element with a lower degree while maintaining the same coefficient bound, compared to Guillevic’s method, which uses only one subfield. Consequently, the element derived from our approach enjoys a smaller norm, which improves the efficiency of individual logarithm computation.
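The role of the degree and coefficient bound can be seen from the standard norm computation in NFS: for a monic defining polynomial f, the norm of the element represented by g is, up to sign, the resultant Res(f, g), which grows roughly like \(\Vert g\Vert_\infty^{\deg f}\,\Vert f\Vert_\infty^{\deg g}\). The sketch below (made-up polynomials and bounds, not parameters from the paper) compares a degree-5 and a degree-3 representative with the same coefficient bound.

```python
# A hedged numerical sketch of why a lower-degree representative helps: for monic
# f, |Res(f, g)| is the absolute norm of g evaluated at a root of f, so at a fixed
# coefficient bound a lower-degree g typically yields a smaller norm.
import random
from sympy import symbols, Poly

x = symbols('x')
f = Poly(x**6 + 3*x**3 + 7, x)              # stand-in defining polynomial (degree 6, monic)
bound = 10**3                                # common coefficient bound for both candidates

def random_poly(degree):
    """Random integer polynomial of the given degree with coefficients bounded by `bound`."""
    coeffs = [random.randint(1, bound)] + [random.randint(-bound, bound) for _ in range(degree)]
    return Poly(coeffs, x)

g_high = random_poly(5)                      # degree-5 representative
g_low = random_poly(3)                       # degree-3 representative, same coefficient bound

print("norm size for the degree-5 representative:", abs(f.resultant(g_high)))
print("norm size for the degree-3 representative:", abs(f.resultant(g_low)))
```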
{"title":"Utilizing two subfields to accelerate individual logarithm computation in extended tower number field sieve","authors":"Yuqing Zhu, Chang Lv, Jiqiang Liu","doi":"10.1007/s10623-025-01590-3","DOIUrl":"https://doi.org/10.1007/s10623-025-01590-3","url":null,"abstract":"<p>The hardness of discrete logarithm problem (DLP) over finite fields forms the security foundation of many cryptographic schemes. When the characteristic is not small, the state-of-the-art algorithms for solving the DLP are the number field sieve (NFS) and its variants. NFS first computes the logarithms of the factor base, which consists of elements of small norms. Then, for a target element, its logarithm is calculated by establishing a relation with the factor base. Although computing the factor-base elements is the most time-consuming part of NFS, it can be performed only once and treated as pre-computation for a fixed finite field when multiple logarithms need to be computed. In this paper, we present a method for accelerating individual logarithm computation by utilizing two subfields. We focus on the case where the extension degree of the finite field is a multiple of 6 within the extended tower number field sieve framework. Our method allows for the construction of an element with a lower degree, while maintaining the same coefficient bound compared to Guillevic’s method, which uses only one subfield. Consequently, the element derived from our approach enjoys a smaller norm, which will improve the efficiency in individual logarithm computation.</p>","PeriodicalId":11130,"journal":{"name":"Designs, Codes and Cryptography","volume":"26 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2025-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143819557","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}