Pub Date: 2022-12-30 | DOI: 10.30837/rt.2023.4.211.04
D. Harmash
The article analyzes the essence and protection capabilities of the Falcon post-quantum signature. The main properties of the Falcon signature are considered, and an estimate is made of the resources and computing power required to use it successfully. A structural analysis of the Falcon signature is performed, and the GPV and Rabin frameworks are analyzed. The security and complexity of the GPV and Rabin frameworks are evaluated, and the main structures and protocols of these frameworks are considered. A detailed analysis of the main properties of NTRU lattices is carried out, and the main factorization rules of the GPV and Rabin frameworks are considered. Fast Fourier sampling is investigated. Detailed conclusions are drawn from each of the conducted analyses.
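The GPV hash-and-sign principle underlying Falcon can be illustrated with a toy verification check. The sketch below uses tiny stand-in parameters (n = 8, q = 97, an arbitrary norm bound) rather than Falcon's real ones, and omits key generation and signing, which require the secret trapdoor basis:

```python
# Toy sketch of Falcon's GPV-style verification condition (illustration
# only: N, Q, the bound and the hash are stand-ins, not Falcon's real
# parameters). The verifier recomputes s1 = H(m) - s2*h in the ring
# Z_Q[x]/(x^N + 1) and accepts iff the pair (s1, s2) is short.
import hashlib

N, Q = 8, 97           # toy ring Z_Q[x]/(x^N + 1)
BOUND_SQ = 4000        # toy squared-norm acceptance bound (assumption)

def negacyclic_mul(a, b):
    """Schoolbook product in Z_Q[x]/(x^N + 1)."""
    res = [0] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            k = i + j
            if k < N:
                res[k] += ai * bj
            else:
                res[k - N] -= ai * bj  # reduce using x^N = -1
    return [c % Q for c in res]

def hash_to_poly(msg: bytes):
    """Hash a message to a polynomial with coefficients mod Q."""
    d = hashlib.shake_128(msg).digest(2 * N)
    return [int.from_bytes(d[2 * i:2 * i + 2], "big") % Q for i in range(N)]

def centered(c):
    """Lift a residue mod Q to the centered range for norm computation."""
    return c - Q if c > Q // 2 else c

def verify(msg: bytes, s2, h):
    s1 = [(x - y) % Q
          for x, y in zip(hash_to_poly(msg), negacyclic_mul(s2, h))]
    norm_sq = sum(centered(c) ** 2 for c in s1) + sum(c * c for c in s2)
    return norm_sq <= BOUND_SQ

# Without the trapdoor it is hard to find a short s2; a zero "signature"
# leaves s1 = H(m), whose norm is then checked against the bound.
print(verify(b"example message", [0] * N, [3, 1, 4, 1, 5, 9, 2, 6]))
```

A real Falcon verifier works the same way in spirit: recompute s1 from the message hash and the public key h, then accept only if (s1, s2) is short enough that only the trapdoor holder could plausibly have produced it.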
Title: Analysis of the Falcon signature compared to other signatures. GPV and Rabin frameworks
Journal: Visnyk NTUU KPI Seriia-Radiotekhnika Radioaparatobuduvannia
Pub Date: 2022-09-28 | DOI: 10.30837/rt.2022.3.210.13
M.A. Yasnohorodskyi
Thermophotovoltaics (TPV) is a process by which photons emitted by a heat emitter are converted into electrical energy by a photovoltaic cell. Selective heat emitters that can survive temperatures at or above 1000°C have the potential to significantly improve the energy conversion efficiency of a photovoltaic cell by limiting the emission of photons with energies below the cell's band gap energy.

Waste heat can be a valuable source of energy if a way can be found to harvest it efficiently. Deviations from ideal absorption and ideal blackbody behavior lead to light losses. For selective emitters, any light emitted at wavelengths outside the bandgap energy of the photovoltaic system may not be efficiently converted, reducing efficiency. In particular, it is difficult to avoid emission associated with phonon resonance at wavelengths in the deep infrared, which cannot be practically converted. An ideal emitter would emit no light at wavelengths other than the bandgap energy, and much TPV research is devoted to designing emitters that better approximate this narrow emission spectrum.

TPV systems usually consist of a heat source, a radiator, and a waste heat removal system. TPV cells are placed between the emitter, often a metal or similar block, and the cooling system, often a passive radiator.

Efficiency, heat resistance, and cost are the three main factors in choosing a TPV emitter. Efficiency is determined by the absorbed energy relative to the incoming radiation. High-temperature operation is critical because efficiency increases with operating temperature. As the temperature of the emitter increases, the blackbody radiation shifts toward shorter wavelengths, allowing more efficient absorption by photovoltaic cells. This paper demonstrates the feasibility of using materials such as platinum, gold, and nichrome as the metal component in a metamaterial emitter with respect to their absorption and thermal stability.
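The shift of blackbody emission toward shorter wavelengths at higher emitter temperatures follows Wien's displacement law, λ_peak = b/T. A minimal sketch (the temperatures below are illustrative, not taken from the paper):

```python
# Wien's displacement law: lambda_peak = b / T, where
# b ~= 2.898e-3 m*K is Wien's displacement constant.
WIEN_B = 2.897771955e-3  # m*K

def peak_wavelength_um(temp_celsius: float) -> float:
    """Peak emission wavelength (micrometres) of an ideal blackbody."""
    temp_kelvin = temp_celsius + 273.15
    return WIEN_B / temp_kelvin * 1e6

for t in (1000.0, 1500.0, 2000.0):  # illustrative TPV emitter temperatures
    print(f"{t:.0f} degC -> peak emission near {peak_wavelength_um(t):.2f} um")
```

At 1000°C the peak sits near 2.28 µm, moving toward 1.27 µm at 2000°C, which is why hotter emitters couple better to the bandgap of practical photovoltaic cells.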
Title: The use of various materials as a metal component in a metamaterial thermophotovoltaic emitter
Pub Date: 2022-09-28 | DOI: 10.30837/rt.2022.3.210.07
Y. Kotukh, V. Lubchak, O. Strakh
A wireless sensor network (WSN) is a group of "smart" sensors with a wireless infrastructure designed to monitor the environment. This technology is a basic building block of the Internet of Things (IoT). A WSN may transmit confidential information while operating in an insecure environment, so appropriate security measures must be considered in the network design. However, the computational constraints of nodes, limited storage space, unstable power supply, unreliable communication channels, and unattended operation are significant barriers to the application of cybersecurity techniques in these networks. This paper considers a new continuous-discrete model of malware propagation through wireless sensor network nodes, based on a system of so-called dynamic equations with impulsive effect on time scales.
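The continuous-discrete idea can be illustrated with a toy susceptible-infected model: infection spreads continuously between patching instants, and at each instant a fraction of infected nodes is cleaned impulsively. All parameters below are assumptions for illustration; the paper's actual dynamic equations on time scales are not reproduced here:

```python
# Toy continuous-discrete malware model for a WSN (illustration only;
# beta, mu, patch_fraction and patch_period are assumed values).
# Between impulses the infection evolves continuously (Euler steps);
# at each impulse a fraction of infected nodes is patched instantly
# and treated as removed from the network.
def simulate(n_nodes=1000, i0=10, beta=0.3, mu=0.05,
             patch_period=5.0, patch_fraction=0.5,
             t_end=30.0, dt=0.01):
    s, i = float(n_nodes - i0), float(i0)
    t, next_patch = 0.0, patch_period
    history = [(t, i)]
    while t < t_end:
        # continuous dynamics: mass-action infection, natural recovery
        new_inf = beta * s * i / n_nodes
        s += -new_inf * dt
        i += (new_inf - mu * i) * dt
        t += dt
        if t >= next_patch:  # impulsive effect: instantaneous patching
            i *= (1.0 - patch_fraction)
            next_patch += patch_period
        history.append((t, i))
    return history

hist = simulate()
print(f"peak infected nodes: {max(i for _, i in hist):.1f}")
```

The sawtooth shape of the infected curve (growth between impulses, jumps down at patch instants) is the qualitative behavior such continuous-discrete models capture.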
Title: New continuous-discrete model for wireless sensor networks security
Pub Date: 2022-09-28 | DOI: 10.30837/rt.2022.3.210.02
Y. Gorbenko, S.O. Kandii
The study of key encapsulation mechanisms over algebraic lattices is one of the important directions in modern post-quantum cryptography, since many such mechanisms are either already standardized (ANSI X9.98, DSTU 8961:2019 "Skelya") or are promising candidates for standardization (CRYSTALS-Kyber, FrodoKEM). The purpose of this work is to compare the security arguments of the DSTU 8961:2019 "Skelya", CRYSTALS-Kyber, and FrodoKEM key encapsulation mechanisms. The paper compares the theoretical proofs in the idealized random oracle model (ROM) and quantum random oracle model (QROM), as well as the specific values of the security parameters in the core-SVP model, which is the de facto standard for lattice cryptography. Since the three key encapsulation mechanisms are based on different hard problems (NTRU, Module-LWE, LWE), a comparison of these hard lattice problems and of their security arguments is also given. The strengths and weaknesses of the considered key encapsulation mechanisms are shown, and areas of research that require more detailed attention are highlighted.
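In the core-SVP model, an attack on a lattice scheme is costed as a single SVP call in dimension b (the BKZ block size needed by the best known attack), with sieving exponents of roughly 0.292 per dimension classically and 0.265 quantumly. A minimal sketch with illustrative block sizes (not the exact values claimed for any of the three schemes):

```python
# Core-SVP hardness estimate: cost of the best known attack is modelled
# as 2^(c*b), where b is the required BKZ block size and c is the
# sieving exponent (0.292 classical, 0.265 quantum). The block sizes in
# the loop are illustrative assumptions, not any scheme's real values.
COST_EXPONENT = {"classical": 0.292, "quantum": 0.265}

def core_svp_bits(block_size: int, model: str = "classical") -> float:
    """Security level in bits under the core-SVP cost model."""
    return COST_EXPONENT[model] * block_size

for b in (400, 600, 875):
    c = core_svp_bits(b, "classical")
    q = core_svp_bits(b, "quantum")
    print(f"b={b}: ~{c:.0f} bits classical, ~{q:.0f} bits quantum")
```

This is why papers in the area report security as a pair of numbers per parameter set: the same block size b yields a lower bit-security estimate against a quantum sieve than against a classical one.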
Title: Comparison of security arguments of promising key encapsulation mechanisms
Pub Date: 2022-09-28 | DOI: 10.30837/rt.2022.3.210.05
M. Yesina, Ye. V. Ostrianska, I. Gorbenko
In recent years, there has been steady progress in the creation of quantum computers. If large-scale quantum computers are built, they will threaten the security of many widely used public-key cryptosystems. Key-establishment schemes and digital signatures based on factorization, discrete logarithms, and elliptic curve cryptography will be most affected. Symmetric cryptographic primitives such as block ciphers and hash functions will be affected to a much lesser extent. As a result, research has intensified on public-key cryptosystems that would be secure against cryptanalysts with both quantum and classical computers. This area is often called post-quantum cryptography (PQC), or sometimes quantum-resistant cryptography. The goal is to design schemes that can be deployed in existing communication networks and protocols without significant changes. The National Institute of Standards and Technology is in the process of selecting one or more public-key cryptographic algorithms through an open competition. New public-key cryptography standards will define one or more additional digital signature, public-key encryption, and key-establishment algorithms. These algorithms are expected to protect confidential information for the foreseeable future, including after the advent of quantum computers. After three rounds of evaluation and analysis, NIST has selected the first algorithms to be standardized as a result of the PQC standardization process. The purpose of this article is to review and analyze the state of NIST's post-quantum cryptography standardization evaluation and selection process. The article summarizes each of the 15 candidate algorithms from the third round and identifies the algorithms selected for standardization, as well as those that will continue to be evaluated in the fourth round of analysis.
Although the third round is coming to an end and NIST will begin developing the first PQC standards, standardization efforts in this area will continue for some time. This should not be interpreted as meaning that users should wait to adopt post-quantum algorithms. NIST looks forward to the rapid implementation of these first standardized algorithms and will issue future guidance on the transition. The transition will undoubtedly have many complexities, and there will be challenges for some use cases such as IoT devices or certificate transparency.
Title: Status report on the third round of the NIST post-quantum cryptography standardization process
Pub Date: 2022-09-28 | DOI: 10.30837/rt.2022.3.210.04
V. Yesin, V. Vilihura
Currently, many users prefer to outsource data to third-party cloud servers in order to reduce the burden on local storage. However, storing sensitive data on remote servers creates security challenges and is a source of concern for data owners. With ever-growing security and privacy concerns, it is becoming increasingly important to encrypt data stored remotely. However, traditional encryption prevents search operations over the encrypted data. One approach to solving this problem is searchable encryption. Solutions for search in secure databases cover a wide range of cryptographic techniques, although there is still no dominant solution. Designing secure search systems is a balance between security, functionality, performance, and usability. Therefore, this paper provides an overview of some of the important current secure search solutions. The main searchable encryption systems for databases that support SQL are considered. The strengths and weaknesses of the analyzed systems and the techniques implemented in them are highlighted. A comparative analysis of some characteristics of the compared systems is given. Attention is drawn to the fact that the ability to perform search operations over encrypted data complicates the systems and increases both the required memory and query execution time. All this indicates that the problem of protected search remains open and that further research in this direction is needed to ensure secure work with remote databases and data warehouses.
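The basic idea behind many searchable-encryption schemes can be sketched with deterministic keyword trapdoors: the client derives a keyed token per keyword, and the server matches tokens without learning the keywords themselves. This is a minimal illustration, not one of the systems surveyed in the paper; the row layout and names are assumptions:

```python
# Minimal searchable-symmetric-encryption sketch (illustration only):
# the client stores HMAC-based keyword tokens next to opaque
# ciphertexts, so the server can match a queried keyword without
# learning it. Real schemes add index structures, randomized tokens
# and leakage control on top of this basic pattern.
import hashlib
import hmac
import os

KEY = os.urandom(32)  # client-side secret key, never sent to the server

def keyword_token(word: str) -> bytes:
    """Deterministic per-keyword trapdoor the server can compare."""
    return hmac.new(KEY, word.lower().encode(), hashlib.sha256).digest()

# "Encrypted" rows held by the server: (ciphertext, set of tokens).
server_rows = [
    (b"<ciphertext-1>", {keyword_token("invoice"), keyword_token("march")}),
    (b"<ciphertext-2>", {keyword_token("report")}),
]

def server_search(trapdoor: bytes):
    """Server-side matching: compares opaque tokens only."""
    return [ct for ct, tokens in server_rows if trapdoor in tokens]

print(len(server_search(keyword_token("invoice"))))  # prints 1
```

The trade-off the paper highlights is visible even here: the extra token sets enlarge storage, and deterministic tokens leak search and access patterns, which stronger (and slower) schemes try to hide.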
Title: Researching basic searchable encryption schemes in databases that support SQL
Pub Date: 2022-09-28 | DOI: 10.30837/rt.2022.3.210.16
O. Romanov, I. Svyd, N. Korniienko, A.O. Romanov
The possibilities of managing an optical network with a logically centralized SDN control plane based on the Open Network Operating System (ONOS) are investigated. The structure of the controller and its main functional blocks are considered, which ensure the collection of information about the state of network elements, the solution of the main control tasks, and the interaction of control systems built on different technological bases. The role and place of the open network operating system in the controller structure are shown, the ONOS multilevel architecture is described as a set of functional modules, the purpose and functions of the ONOS subsystems are analyzed, and the protocols and interfaces that make it possible to represent the SDN network as a model are described. The peculiarity of the model is that the managed network can be represented as a set of virtual network functions. The control process therefore becomes independent of which vendor's equipment was used to build the network, and of whether the network is built on real physical elements or virtual ones. Using ONOS makes it possible to build a logically centralized control plane in SDN networks, and its existing set of functional modules, services, and interfaces is sufficient to perform optical network management tasks. For the further development of ONOS, it is necessary to develop mathematical models and methods for the optimal solution of control problems under various operating conditions, which will later become application-level software modules.
Title: Optical Network Management by ONOS-Based SDN Controller
Pub Date: 2022-09-28 | DOI: 10.30837/rt.2022.3.210.03
Yaroslav Derevianko, I. Gorbenko
It is well known that quantum algorithms offer exponential speedup in solving the integer factorization and discrete logarithm problems on which existing public-key systems rely. Post-quantum cryptography therefore seeks alternative classical algorithms that can withstand quantum cryptanalysis. Growing concern about the quantum threat has prompted the National Institute of Standards and Technology (NIST) to invite and evaluate candidates for a post-quantum cryptography standard, an ongoing process scheduled to be completed by 2023.

Falcon is an electronic signature algorithm based on the mathematics of algebraic lattices. A disadvantage of this algorithm is that its resistance to special attacks, including side-channel attacks, has so far been studied relatively little.

This paper examines existing attacks on the implementation and analyzes performance when countermeasures against such attacks are applied. Although the Falcon sampler, as well as certain mathematical transformations, remains vulnerable to attacks that allow recovery of the private key, the efficiency of the components and mathematics of this signature algorithm keeps it competitive with other schemes even with countermeasures in place.

The paper also considers a side-channel attack on Falcon: a known-plaintext attack that uses the device's electromagnetic emanations to recover secret signature keys, which can then be used to forge signatures on arbitrary messages. The results obtained show that Falcon is quite vulnerable to side-channel attacks and that the proposed implementation does not yet include protection against them.
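The general side-channel concern can be illustrated with its simplest form, timing leakage: a data-dependent early exit leaks how many leading bytes of a secret match, while a constant-time version does the same work regardless of the input. This is a generic illustration, not the Falcon sampler countermeasure itself:

```python
# Generic side-channel illustration (not Falcon-specific): an early-exit
# byte comparison leaks, through timing, how many leading bytes match,
# while the constant-time variant touches every byte regardless of data.
def leaky_equal(a: bytes, b: bytes) -> bool:
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:          # data-dependent early exit -> timing leak
            return False
    return True

def constant_time_equal(a: bytes, b: bytes) -> bool:
    if len(a) != len(b):
        return False
    diff = 0
    for x, y in zip(a, b):  # always processes the full buffer
        diff |= x ^ y
    return diff == 0

assert constant_time_equal(b"secret", b"secret")
assert not constant_time_equal(b"secret", b"secreX")
```

In Python, `hmac.compare_digest` provides such a constant-time comparison out of the box; Falcon-specific countermeasures additionally have to address its Gaussian sampler and floating-point arithmetic, which are the components the attacks discussed here target.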
Because of this, standardization and implementation efforts should consider the possibility of physical attacks, as well as options for countering them.
Title: FALCON signature vulnerability to special attacks and its protection
Pub Date: 2022-09-28 | DOI: 10.30837/rt.2022.3.210.06
Ye. V. Ostrianska, M. Yesina, I. Gorbenko
Virtually all asymmetric cryptographic schemes currently in use are threatened by the potential development of powerful quantum computers. Although it is very unclear when, or even whether, a cryptographically relevant quantum computer (CRQC) will ever be built, and the gap between modern quantum computers and the envisioned CRQC is huge, the risk means that currently deployed public-key cryptography must be replaced by quantum-resistant alternatives. For example, information encrypted using modern public-key cryptography can be recorded by cryptanalysts now and attacked later if a CRQC is ever created. The potential harm a CRQC could cause motivates the search for countermeasures, despite the uncertainty about when and whether such computers can be built. Deployed systems that use public-key cryptography can also take years to update. Post-quantum cryptography is one way to counter quantum computer threats. Its security is based on the complexity of mathematical problems that are currently considered not efficiently solvable, even with the help of quantum computers. Post-quantum cryptography deals with the development and research of asymmetric cryptosystems which, according to current knowledge, cannot be broken even by powerful quantum computers. These methods are based on mathematical problems for which neither efficient classical algorithms nor efficient quantum algorithms are known today. Modern research pursues several approaches to post-quantum cryptography, including code-based, lattice-based, hash-based, isogeny-based, and multivariate cryptography.
The purpose of this work is to review the computational model of quantum computers; quantum algorithms, which have the greatest impact on modern cryptography; the risk of creating cryptographically relevant quantum computers (CRQC); security of symmetric cryptography and public key cryptography in the presence of CRQC; NIST PQC standardization efforts; transition to quantum-resistant public-key cryptography; relevance, views and current state of development of quantum-resistant cryptography in the European Union. It also highlights the progress of the most important effort in the field: NIST's standardization of post-quantum cryptography.
Analysis of views of the European Union on quantum-post-quantum limitations
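The differing impact of known quantum algorithms on symmetric versus public-key cryptography, mentioned above, can be sketched numerically. This is an illustrative calculation, not code from the article: Grover's search gives a quadratic speedup against an ideal symmetric cipher, roughly halving its bit-security, while Shor's algorithm breaks factoring- and discrete-log-based schemes outright regardless of key size.

```python
# Rough effective security of common symmetric ciphers against Grover's
# quadratic-speedup search. An exhaustive key search over 2**k keys takes
# on the order of 2**(k/2) Grover iterations, so the effective security
# level is about half the key length.

def grover_effective_bits(key_bits: int) -> int:
    """Approximate bit-security of a k-bit symmetric cipher vs Grover."""
    return key_bits // 2

symmetric = {"AES-128": 128, "AES-192": 192, "AES-256": 256}
for name, bits in symmetric.items():
    print(f"{name}: classical {bits}-bit -> ~{grover_effective_bits(bits)}-bit vs Grover")

# By contrast, RSA, ECDSA and DH rely on factoring or discrete logarithms,
# which Shor's algorithm solves in polynomial time on a CRQC. Larger keys
# do not help; these schemes need quantum-resistant replacements.
```

This is why post-quantum proposals replace the public-key layer entirely, while symmetric primitives are typically kept with doubled key lengths.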
Pub Date: 2022-09-28, DOI: 10.30837/rt.2022.3.210.15
Oleg V. Lazorenko, A. A. Onishchenko, L. Chernogor
One of the main numerical characteristics used in many methods of fractal analysis is the corresponding fractal dimension. In the vast majority of cases these dimensions are estimated with rather low accuracy, which is unsatisfactory above all for practitioners. A corrective-function method is put forward that compensates for the ever-present nonlinearity between the true value of a fractal dimension and its estimate, obtained with a chosen method of monofractal analysis of signals and processes for a known number of samples in the discrete data vector of the signal under study. The main idea of the method is to build and apply a special correction function using a set of model fractal signals with previously known fractal dimensions. The mathematical foundations of the new method are outlined. The practical application of the corrective-function method is considered on the example of estimating the regularization, box-counting, variation, and Hurst fractal dimensions. For each of them, the minimum number of samples in the discrete data vector at which the dimension can still be estimated is determined. Using a set of model monofractal and multifractal signals, the effectiveness of the corrective-function method is demonstrated, with the dynamical fractal analysis method as an example. It is shown that applying the correction-function method reduces the maximum deviation of the estimated fractal dimension from the true known value from 25–55% to 5–7%.
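The calibration idea described above can be sketched in a few lines. This is a hedged toy illustration, not the authors' code: the biased estimator below is a made-up stand-in for a real monofractal estimator (e.g. box-counting on a finite sample), and the corrective function is a polynomial fitted on model signals with known dimensions, then applied to map a raw estimate back toward the true value.

```python
import numpy as np

def biased_estimator(d_true: float) -> float:
    # Hypothetical stand-in for a real fractal-dimension estimator whose
    # output depends nonlinearly on the true dimension at finite sample size.
    return 1.0 + 0.7 * (d_true - 1.0) + 0.1 * (d_true - 1.0) ** 2

# Calibration set: model fractal signals with known dimensions in [1, 2],
# playing the role of the set of model signals used by the method.
d_true = np.linspace(1.0, 2.0, 21)
d_est = np.array([biased_estimator(d) for d in d_true])

# Corrective function: polynomial mapping estimated -> true dimension.
coeffs = np.polyfit(d_est, d_true, deg=3)
correct = np.poly1d(coeffs)

raw = biased_estimator(1.5)  # biased estimate for a signal with D = 1.5
print(f"raw estimate: {raw:.3f}, corrected: {correct(raw):.3f}")
```

The fitted polynomial inverts the estimator's systematic bias over the calibration range, which mirrors how the correction function shrinks the deviation of the estimate from the true dimension.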
Corrective Function Method for the Fractal Analysis