Egor Manuylovich, Diego Argüello Ron, Morteza Kamalian-Kopae, Sergei K. Turitsyn
Communications Engineering, pp. 1–7, published 2024-11-13. DOI: 10.1038/s44172-024-00314-0. Open-access PDF: https://www.nature.com/articles/s44172-024-00314-0.pdf
Robust neural networks using stochastic resonance neurons
Various successful applications of deep artificial neural networks are effectively facilitated by the possibility of increasing the number of layers and neurons in the network, at the expense of growing computational complexity. Increasing computational complexity to improve performance makes hardware implementation more difficult and directly affects both power consumption and the accumulation of signal processing latency, which are critical issues in many applications. Power consumption can potentially be reduced using analog neural networks, the performance of which, however, is limited by noise aggregation. Following the idea of physics-inspired machine learning, we propose here a type of neural network using stochastic resonances as dynamic nonlinear nodes and demonstrate the possibility of considerably reducing the number of neurons required for a given prediction accuracy. We also observe that the performance of such neural networks is more robust against the impact of noise in the training data compared to conventional networks.

Manuylovich and colleagues propose the use of stochastic resonances in neural networks as dynamic nonlinear nodes. They demonstrate the possibility of reducing the number of neurons for a given prediction accuracy and observe that the performance of such neural networks can be more robust against the impact of noise in the training data compared to conventional networks.
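To give a feel for what a stochastic-resonance node means in this context, the following is a minimal illustrative sketch (not the authors' actual model): an overdamped particle in a double-well potential, driven by the input signal plus noise and integrated with the Euler–Maruyama scheme. All parameter names and values here are assumptions chosen for the illustration; the time-averaged state plays the role of the neuron's nonlinear output.

```python
import numpy as np

def sr_neuron(u, a=1.0, b=1.0, noise_std=0.3, dt=0.01, steps=500, seed=0):
    """Hypothetical stochastic-resonance node: overdamped dynamics in the
    double-well potential V(x) = -a*x**2/2 + b*x**4/4, driven by input u
    plus white noise. Returns the time-averaged state as the output."""
    rng = np.random.default_rng(seed)
    x = 0.0
    trace = np.empty(steps)
    for k in range(steps):
        drift = a * x - b * x**3 + u  # bistable restoring force + input drive
        # Euler–Maruyama step: deterministic drift plus scaled Gaussian noise
        x += drift * dt + noise_std * np.sqrt(dt) * rng.standard_normal()
        trace[k] = x
    return trace.mean()
```

A positive input biases the state toward the positive well and a negative input toward the negative one, so the averaged response acts as a smooth, noise-shaped nonlinearity; in the paper's framing, the noise is part of the node's useful dynamics rather than something to be filtered out.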