{"title":"Public Key Generation Principles Impact Cybersecurity","authors":"N. Stoianov, Andrey Ivanov","doi":"10.11610/isij.4717","DOIUrl":null,"url":null,"abstract":"A R T I C L E I N F O : RECEIVED: 22 JUNE 2020 REVISED: 08 SEP 2020 ONLINE: 22 SEP 2020 K E Y W O R D S : public key cryptography, Miller–Rabin primality test improvement, cybersecurity Creative Commons BY-NC 4.0 Introduction Nowadays the growing use of online communications over the Internet and the associated threats to the data we exchange, requires sufficient and reliable protection of the information exchanged. One of the most reliable and basic method to make information secure, when two communicating parties don't know each other, is public key cryptography. Hard-to-solve mathematical problems are used to realize the mathematical foundations of the existing algorithms using public key cryptography. 2 In this mathematics, integer operations with large numbers are used and are based on modulo calculations of large prime numbers. Large prime numbers are also used to produce the user’s cryptographic keys (public and private). N. Stoianov & A. Ivanov, ISIJ 47, no. 2 (2020): 249-260 250 We could state that the security of the exchanged data protected by public key cryptography is due to two main facts: the difficulty of solving a mathematical algorithm and the reliability of the generated prime numbers used as keys in such a system. In this paper we will consider deterministic and probabilistic primality tests and will focus over the most widely used in practice an algorithm for testing prime numbers, that of Miller-Rabin 3, 4 and we will propose a new addition to it, which will increase the reliability of the estimation that this probabilistic algorithm gives. Public Key Generation and The Importance of The Prime Numbers Public key cryptography algorithms are based on two main things: difficult to solve mathematical problems and prime numbers with big values that serve as user private keys. If that prime numbers are generated not according to the prescribed rules or are not reliably confirmed as such, the security strength of protected data could be not enough. In an effort to ensure better protection of information, new algorithms and rules for generating private keys and the prime numbers involved in their compilation are created and proposed. 6 The more than 85% from certificate authorities (CA) based their root certificate security by using RSA encryption and signing scheme. Approximately of 10% of CA combinate both RSA and ECDSA cryptographic schemes to protect their public key infrastructure (PKI). This statement is based on our study in which we analysed the certificates stored into Windows, Android and Linux operating systems (OS) certificates stores. These operating systems are the most commonly used worldwide. We can say that a reliable estimate of divisibility of numbers is essential. Connected with this we will consider algorithms for primality testing. In practice, they are divided into two main types. Deterministic and probabilistic algorithms. Deterministic Primality Testing The most elementary approach to primality proving is trial division. If attempt to divide p by every integer n ≤ ⌊√p⌋ and no such n divides p, then p is prime. But this task will take O (√p M(log p)) time complexity, which is impractical for large values of p. That is why the most practical algorithms have to be used to deter the big numbers divisibility to factors. 
An algorithm created in 2002, AKS (Agrawal, Kayal, and Saxena), falls into the group of tests that give an unambiguous assessment of divisibility of numbers. At the heart of AKS algorithm is Fermat's Little Theorem. The Fermat's Little Theorem states that: if a number p is prime, a ∈ Z, p ∈ N and GCD(a, p) = 1 then a ≡ a mod p The primality test by using this theorem fails for a specific class of numbers, known as pseudoprimes, which include the Carmichael numbers. Primarily based on a polynomial generalization of the Fermat’s Little Theorem the AKS algorithm state that: the number p is prime if and only (x + a) ≡ (x + a) mod p Public Key Generation Principles Impact Cybersecurity 251 where a ∈ Z, p ∈ N. The time complexity here would be Ω(n) which is not polynomial time. To reduce complexity, we can divide both sides by (x − 1). Therefore, for a chosen r the number of computations needed to be performed is less. Hence, the main objective now is to choose an appropriately small r and test if the equation: (x + a) ≡ (x + a) mod GCD(x − 1, p) is satisfied for sufficient number of a’s. The algorithm proposed by Agrawal, Kayal and Saxena 8,9 for primality testing has following steps: (1) if p = a for a ∈ N, b > 1 output COMPOSIT (2) find smallest r such that Or(p) > log p (3) if 1 < GCD(a, p) < p for some a ≤ r then output COMPOSIT (4) if n ≤ r output PRIME (5) for each a ∈ 1. . ⌊√φ(p) log p⌋ (6) if (x + a) ≠ (x + a) mod GCD(x − 1, p) (7) output COMPOSIT (8) output PRIME The complexity of execution of that algorithm is ?̃?(logn) time. Hence the execution time will be proportional to (log n) if p grows larger. This is a polynomial time function, which although not as fast as the probabilistic tests used nowadays, has the advantage of being fully deterministic. Probabilistic Testing of Prime Numbers. Miller-Rabin Primality Test We know two mathematical ways to prove that a number p is composite: a. number p factorization, where: p = a. b and a, b > 1 (2.2.1) b. Exhibit a Fermat witness for p, i.e. find a number x satisfying: x ≢ 1 mod p (2.2.2) The speed of these algorithms, which certainly determine whether a number is divisible, is unsatisfactory. This requires some of the probabilistic algorithms for primality test to be more widely used. The Miller-Rabin 10,11 test is based on a third way to prove that a number is composite. c. Exhibit a no square root of 1 mod p. That means to find a number x such that: x ≡ 1 mod p and x ≢ ±1 mod p (2.2.3) N. Stoianov & A. Ivanov, ISIJ 47, no. 2 (2020): 249-260 252 The Miller-Rabin test is the most widely used probabilistic primality test. This algorithm was proposed in 70’s. Miller and Rabin gave two versions of the same algorithm to test whether a number p is prime or not. Rabin’s algorithm works with a randomly chosen x ∈ Zp, and is therefore a randomized one. Correctness of Miller’s algorithm depends on correctness of Extended Riemann Hypothesis. In his test method it is need to tests deterministically for all x’s, where 1 < x < 4. logp. If x is a witness for an integer p, then p must be composite and we say that x witnesses the compositeness of p. Prime numbers clearly have no witnesses. If we picked up enough count of x’s (100 or more depends on size of p) and no one is a witness, we can accept that number p is probably prime. The algorithm realization steps are: (1) Denote p − 1 = s. 
2, where s mod 2 = 1 (2) randomly picked up x ∈ Zp (3) if x ≢ 1 mod p output COMPOSITE (p is definitely composite) (4) b = x mod p (5) if b ≡ ±1 mod p , output PRIME (x is not a witness, p could be prime) (6) Loop i ∈0..m-1 (7) b ← b mod p (8) if b ≡ −1 mod p then (9) output PRIME (x is not a witness, p could be prime) (10) output COMPOSITE x is a witness p, is definitely not prime The time complexity of that algorithm is ?̃?(y. logn) where the y is the count of the iterations i.e. the different values of randomly chosen x. Subgroup Extending by New Generating Number of a Ring is Gained. In this part of the paper we will considering primality test based on two criteria and an idea of a method of transitioning (without intersection) or extending different multiplicative subgroups formed by their generator integer. To describe this method of transitioning/extending multiplicative subgroups formed by number p, we will use linear Diophantine equation: dx . dy − p. k = dz (2.3.1) where di = g i mod p. Every di ∈ Zp and it is part of a ring generated by number g and has ring order #O(g,p) or smaller. In the particular case when dx = dy −1 mod p, value of dz = 1. In our practice dealing with number rings we saw that if dz = 1 quadratic reciprocity ( dx p ) = ( dy p ), and when size of #O(g,p) < p − 1 then number k could highly has different quadratic reciprocity, i.e. ( dx p ) ≠ ( k p ). In cases when that is not true, we can just do that with new value of dx ← k and in few steps of repeating that we can reach a value of k which has different quadratic reciprocity Public Key Generation Principles Impact Cybersecurity 253 to p than the initial dx. We saw more important thing, that the rings formed with generators dx and k, have very often different elements on its sets. If we use q = dx . k mod p as a ring generator its #O(q,p) is different then #O(dx,p) and #O(k,p). To show that we will use two examples. Into the first we will use a prime number with value 1117 and in the second one composite number 21421 = 11 . 1931 . Example 1 p = 1117 , g = 430 , #O(p,p) = 372 , ( 43","PeriodicalId":159156,"journal":{"name":"Information & Security: An International Journal","volume":"3 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Information & Security: An International Journal","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.11610/isij.4717","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Article Info: Received: 22 June 2020 | Revised: 08 Sep 2020 | Online: 22 Sep 2020
Keywords: public key cryptography, Miller-Rabin primality test improvement, cybersecurity
License: Creative Commons BY-NC 4.0

Introduction

Nowadays, the growing use of online communication over the Internet, and the associated threats to the data we exchange, require sufficient and reliable protection of the information exchanged. One of the most reliable and fundamental methods of securing information when the two communicating parties do not know each other is public key cryptography. The mathematical foundations of the existing public key algorithms are built on hard-to-solve mathematical problems. 2 The underlying mathematics uses integer operations on large numbers, based on calculations modulo large prime numbers. Large primes are also used to produce the user's cryptographic keys (public and private).

We could state that the security of data protected by public key cryptography rests on two main facts: the difficulty of solving the underlying mathematical problem, and the reliability of the generated prime numbers used as keys in such a system. In this paper we consider deterministic and probabilistic primality tests, focus on the primality-testing algorithm most widely used in practice, that of Miller-Rabin, 3, 4 and propose a new addition to it that increases the reliability of the estimate this probabilistic algorithm gives.

Public Key Generation and the Importance of Prime Numbers

Public key cryptography algorithms are based on two main things: mathematical problems that are difficult to solve, and prime numbers with big values that serve as user private keys. If those primes are not generated according to the prescribed rules, or are not reliably confirmed to be prime, the security of the protected data may be insufficient. In an effort to ensure better protection of information, new algorithms and rules are created and proposed for generating private keys and the prime numbers involved in their construction. 6

More than 85% of certificate authorities (CAs) base their root certificate security on the RSA encryption and signing scheme, and approximately 10% of CAs combine the RSA and ECDSA cryptographic schemes to protect their public key infrastructure (PKI). This statement is based on our study, in which we analysed the certificates stored in the certificate stores of the Windows, Android, and Linux operating systems (OS), the most commonly used worldwide. We can therefore say that a reliable assessment of whether a number is prime is essential, so we consider algorithms for primality testing. In practice they are divided into two main types: deterministic and probabilistic.

Deterministic Primality Testing

The most elementary approach to primality proving is trial division: if we attempt to divide p by every integer n ≤ ⌊√p⌋ and no such n divides p, then p is prime. But this takes O(√p · M(log p)) time, where M(log p) is the cost of multiplying numbers of log p bits, which is impractical for large values of p. That is why more practical algorithms have to be used to determine whether a big number has factors.
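As a concrete baseline, here is a minimal Python sketch of trial division. The test values, the prime 1117 and the composite 21241 = 11 · 1931, are the numbers used in the paper's examples; the function itself is a straightforward reading of the definition above.

```python
import math

def is_prime_trial_division(p: int) -> bool:
    """Try every n <= floor(sqrt(p)); p is prime iff no such n divides it."""
    if p < 2:
        return False
    for n in range(2, math.isqrt(p) + 1):
        if p % n == 0:
            return False
    return True

# Fine for small inputs, hopeless at cryptographic sizes: a 2048-bit p
# would require on the order of 2^1024 trial divisions.
print(is_prime_trial_division(1117))   # True
print(is_prime_trial_division(21241))  # False: 11 * 1931
```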
An algorithm created in 2002, AKS (Agrawal, Kayal, and Saxena), falls into the group of tests that give an unambiguous assessment of primality. At the heart of the AKS algorithm is Fermat's Little Theorem, which states that if a number p is prime, a ∈ Z, p ∈ N, and GCD(a, p) = 1, then

a^p ≡ a (mod p).

A primality test that uses this theorem directly fails for a specific class of numbers, known as pseudoprimes, which include the Carmichael numbers. Based primarily on a polynomial generalization of Fermat's Little Theorem, the AKS algorithm states that the number p is prime if and only if

(x + a)^p ≡ x^p + a (mod p),

where a ∈ Z, p ∈ N, and GCD(a, p) = 1. The time complexity of verifying this congruence directly would be Ω(p), since the left-hand side has about p coefficients, which is not polynomial time. To reduce the complexity, we can reduce both sides modulo the polynomial x^r − 1; for a suitably chosen r, the number of computations that need to be performed is much smaller. Hence the main objective is to choose an appropriately small r and test whether the equation

(x + a)^p ≡ x^p + a (mod x^r − 1, p)

is satisfied for a sufficient number of values of a. The algorithm proposed by Agrawal, Kayal, and Saxena 8, 9 for primality testing has the following steps:

(1) if p = a^b for some a ∈ N and b > 1, output COMPOSITE
(2) find the smallest r such that O_r(p) > log^2 p
(3) if 1 < GCD(a, p) < p for some a ≤ r, output COMPOSITE
(4) if p ≤ r, output PRIME
(5) for each a ∈ 1..⌊√φ(r) · log p⌋:
(6)   if (x + a)^p ≢ x^p + a (mod x^r − 1, p)
(7)     output COMPOSITE
(8) output PRIME

Here O_r(p) is the multiplicative order of p modulo r, φ is Euler's totient function, and logarithms are to base 2. The complexity of executing this algorithm is Õ(log^(21/2) p), so the execution time grows proportionally to (log p)^(21/2) as p grows larger. This is a polynomial-time function which, although not as fast as the probabilistic tests used nowadays, has the advantage of being fully deterministic.
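The following compact Python sketch follows these steps; it is an illustrative, unoptimized rendering, not the authors' implementation. Polynomials mod (x^r − 1, p) are held as plain coefficient lists, the perfect-power test in step (1) uses floating-point roots (adequate only for moderate p), and √r is taken as a safe upper bound for √φ(r) in step (5).

```python
import math

def multiplicative_order(p: int, r: int) -> int:
    """O_r(p): smallest k with p^k ≡ 1 (mod r); 0 when gcd(p, r) > 1."""
    if math.gcd(p, r) != 1:
        return 0
    k, v = 1, p % r
    while v != 1:
        v = v * p % r
        k += 1
    return k

def poly_mul_mod(a, b, r, p):
    """Multiply coefficient lists a and b modulo (x^r - 1, p)."""
    res = [0] * r
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if bj:
                    res[(i + j) % r] = (res[(i + j) % r] + ai * bj) % p
    return res

def poly_pow_mod(base, e, r, p):
    """Square-and-multiply exponentiation of a polynomial mod (x^r - 1, p)."""
    result = [1] + [0] * (r - 1)
    base = base + [0] * (r - len(base))
    while e:
        if e & 1:
            result = poly_mul_mod(result, base, r, p)
        base = poly_mul_mod(base, base, r, p)
        e >>= 1
    return result

def is_prime_aks(p: int) -> bool:
    if p < 2:
        return False
    # (1) p = a^b with b > 1 means p is composite
    for b in range(2, p.bit_length() + 1):
        a = round(p ** (1.0 / b))
        for c in (a - 1, a, a + 1):
            if c > 1 and c ** b == p:
                return False
    log2p = math.log2(p)
    # (2) smallest r for which the order of p mod r exceeds log^2 p
    r = 2
    while multiplicative_order(p, r) <= log2p ** 2:
        r += 1
    # (3) a gcd strictly between 1 and p exposes a factor
    for a in range(2, r + 1):
        if 1 < math.gcd(a, p) < p:
            return False
    # (4) numbers this small are already settled
    if p <= r:
        return True
    # (5)-(8) check (x + a)^p ≡ x^p + a (mod x^r - 1, p) for enough a's
    limit = int(math.isqrt(r) * log2p) + 1   # sqrt(r) >= sqrt(phi(r))
    for a in range(1, limit + 1):
        lhs = poly_pow_mod([a % p, 1], p, r, p)   # (x + a)^p
        rhs = [0] * r
        rhs[0] = a % p
        rhs[p % r] = (rhs[p % r] + 1) % p          # x^p + a mod (x^r - 1)
        if lhs != rhs:
            return False
    return True

print(is_prime_aks(257))  # True
print(is_prime_aks(221))  # False: 13 * 17
```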
Probabilistic Testing of Prime Numbers. Miller-Rabin Primality Test

We know two mathematical ways to prove that a number p is composite:

a. factor p, i.e. write p = a · b with a, b > 1; (2.2.1)
b. exhibit a Fermat witness for p, i.e. find a number x satisfying x^(p−1) ≢ 1 (mod p). (2.2.2)

The speed of these algorithms, which determine with certainty whether a number is composite, is unsatisfactory. This is why probabilistic primality-testing algorithms are more widely used. The Miller-Rabin test 10, 11 is based on a third way to prove that a number is composite:

c. exhibit a nontrivial square root of 1 mod p, i.e. find a number x such that x^2 ≡ 1 (mod p) and x ≢ ±1 (mod p). (2.2.3)

The Miller-Rabin test is the most widely used probabilistic primality test. The algorithm was proposed in the 1970s; Miller and Rabin gave two versions of the same algorithm for testing whether a number p is prime. Rabin's version works with a randomly chosen x ∈ Z_p and is therefore randomized. The correctness of Miller's version depends on the correctness of the Extended Riemann Hypothesis: his method needs to test deterministically all x with 1 < x < 4 · log^2 p. If x is a witness for an integer p, then p must be composite, and we say that x witnesses the compositeness of p; prime numbers clearly have no witnesses. If we pick enough values of x (100 or more, depending on the size of p) and none of them is a witness, we can accept that the number p is probably prime. The steps of the algorithm are:

(1) write p − 1 = s · 2^m, where s mod 2 = 1
(2) randomly pick x ∈ Z_p
(3) if x^(p−1) ≢ 1 (mod p), output COMPOSITE (p is definitely composite)
(4) b = x^s mod p
(5) if b ≡ ±1 (mod p), output PRIME (x is not a witness; p could be prime)
(6) loop i ∈ 0..m−1:
(7)   b ← b^2 mod p
(8)   if b ≡ −1 (mod p), then
(9)     output PRIME (x is not a witness; p could be prime)
(10) output COMPOSITE (x is a witness; p is definitely not prime)

The time complexity of this algorithm is Õ(y · log^2 p), where y is the number of iterations, i.e. the number of different values of the randomly chosen x.
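A minimal Python sketch of the test, close to the steps above: as most practical implementations do, it folds the Fermat check of step (3) into the squaring chain (a composite caught by step (3) is also caught there), and it repeats the random choice of x for a configurable number of rounds.

```python
import random

def miller_rabin(p: int, rounds: int = 40) -> bool:
    """True: p is probably prime. False: p is definitely composite."""
    if p < 2:
        return False
    if p in (2, 3):
        return True
    if p % 2 == 0:
        return False
    # (1) write p - 1 = s * 2^m with s odd
    s, m = p - 1, 0
    while s % 2 == 0:
        s //= 2
        m += 1
    for _ in range(rounds):
        x = random.randrange(2, p - 1)      # (2) random x in Z_p
        b = pow(x, s, p)                    # (4) b = x^s mod p
        if b in (1, p - 1):                 # (5) x is not a witness
            continue
        for _ in range(m - 1):              # (6)-(9) square, looking for -1
            b = pow(b, 2, p)
            if b == p - 1:
                break
        else:
            return False                    # (10) x witnesses compositeness
    return True

print(miller_rabin(1117))   # True: probably prime
print(miller_rabin(21241))  # False: 11 * 1931
```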
Extending a Subgroup by Gaining a New Generating Number of a Ring

In this part of the paper we consider a primality test based on two criteria and on the idea of a method for transitioning (without intersection) between, or extending, the different multiplicative subgroups formed by their generating integers. To describe this method of transitioning/extending the multiplicative subgroups formed modulo the number p, we use the linear Diophantine equation

d_x · d_y − p · k = d_z, (2.3.1)

where d_i = g^i mod p. Every d_i ∈ Z_p and is part of a ring generated by the number g, whose ring order is #O(g,p) or smaller. In the particular case when d_x = d_y^(−1) mod p, the value of d_z is 1. In our practice dealing with number rings we have seen that if d_z = 1 the quadratic residue symbols are equal, (d_x/p) = (d_y/p), and when the size of #O(g,p) < p − 1, the number k very often has a different quadratic residue symbol, i.e. (d_x/p) ≠ (k/p). In cases when that is not true, we can simply repeat the procedure with the new value d_x ← k, and in a few repetitions we reach a value of k whose quadratic residue symbol with respect to p differs from that of the initial d_x. We have seen something more important: the rings formed by the generators d_x and k very often have different elements in their sets. If we use q = d_x · k mod p as a ring generator, its order #O(q,p) differs from #O(d_x,p) and #O(k,p). To show this we use two examples: in the first we use the prime number 1117, and in the second the composite number 21241 = 11 · 1931.

Example 1

p = 1117, g = 430, #O(g,p) = 372, (43
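To make the bookkeeping concrete, here is a small Python sketch under two assumptions: that (d/p) above denotes the Legendre symbol, computed below via Euler's criterion, and that the exponent i = 5 used to pick d_x is an arbitrary illustrative choice. The modular inverse uses the three-argument pow of Python 3.8+.

```python
def order_mod(g: int, p: int) -> int:
    """#O(g,p): the multiplicative order of g modulo p."""
    k, v = 1, g % p
    while v != 1:
        v = v * g % p
        k += 1
    return k

def legendre(a: int, p: int) -> int:
    """Legendre symbol (a/p) for an odd prime p, via Euler's criterion."""
    ls = pow(a % p, (p - 1) // 2, p)
    return -1 if ls == p - 1 else ls

p, g = 1117, 430
print(order_mod(g, p))              # 372, the #O(g,p) of Example 1

# Choose d_x = g^i mod p and d_y = d_x^{-1} mod p, so that d_z = 1 in
# d_x * d_y - p*k = d_z; then k = (d_x * d_y - 1) / p is an integer.
d_x = pow(g, 5, p)
d_y = pow(d_x, -1, p)
k = (d_x * d_y - 1) // p
assert d_x * d_y - p * k == 1
# With d_z = 1 the symbols (d_x/p) and (d_y/p) must agree; k often differs.
print(legendre(d_x, p), legendre(d_y, p), legendre(k, p))
```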