Neural cryptography: vulnerabilities and attack strategies
L. Beshaj, Gaurav Tyagi
Defense + Commercial Sensing, published 2024-06-06. DOI: 10.1117/12.3013669
Citations: 0
Abstract
A number of research papers have used adversarial neural network architectures to show that two neural networks can communicate securely given a synchronized input, and that without knowledge of this synchronized information the system cannot be breached. In this paper we evaluate these adversarial neural network architectures when a third party gains access to a partial secret key or a noisy secret key, or has knowledge of the loss function, the loss values themselves, or the activation functions used during training of the encryption layers. We focus on the cryptanalysis side, examining the vulnerabilities a neural-network-based cryptography system can face; these findings can be used to improve current neural-network-based cryptography architectures. We show that while the encryption key is necessary to decrypt messages in the neural network domain, adversarial neural networks can occasionally decrypt messages, or raise a concern that requires further training.
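As a hedged illustration of the partial-key threat model described above (not the paper's actual architecture or experiments), consider a toy XOR cipher standing in for a learned encryption layer: an eavesdropper who knows some fraction of the key bits and guesses the rest recovers the plaintext with accuracy interpolating between chance and certainty. All function names here are hypothetical.

```python
import random

def xor_encrypt(plaintext_bits, key_bits):
    # Toy stand-in for a learned encryption layer: bitwise XOR with the key.
    return [p ^ k for p, k in zip(plaintext_bits, key_bits)]

def partial_key_attack(ciphertext_bits, key_bits, known_mask):
    # The eavesdropper knows only the key bits where known_mask is 1;
    # unknown positions are guessed uniformly at random (50% expected accuracy).
    guess = [k if m else random.randint(0, 1)
             for k, m in zip(key_bits, known_mask)]
    return [c ^ g for c, g in zip(ciphertext_bits, guess)]

random.seed(0)
n = 1000
plaintext = [random.randint(0, 1) for _ in range(n)]
key = [random.randint(0, 1) for _ in range(n)]
ciphertext = xor_encrypt(plaintext, key)

for frac in (0.0, 0.5, 1.0):
    # Reveal the first frac*n key bits to the attacker.
    mask = [1 if i < frac * n else 0 for i in range(n)]
    recovered = partial_key_attack(ciphertext, key, mask)
    acc = sum(r == p for r, p in zip(recovered, plaintext)) / n
    print(f"known key fraction {frac:.1f}: recovery accuracy {acc:.2f}")
```

In a learned cipher the degradation is not this clean — the mapping from key bits to ciphertext bits is entangled — which is precisely why empirical evaluation under partial and noisy keys, as the paper undertakes, is needed.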