{"title":"基于随机搜索的神经网络训练","authors":"V. Matskevich","doi":"10.52928/2070-1624-2022-39-11-21-29","DOIUrl":null,"url":null,"abstract":"The paper deals with a state-of-art problem, associated with neural networks training. Training algorithm (with special parallelization procedure) implementing the annealing method is proposed. The training efficiency is demonstrated by the example of a neural network architecture focused on parallel data processing. For the color image compression problem, it is shown that the proposed algorithm significantly outperforms gradient \nmethods in terms of efficiency. The results obtained make it possible to improve the neural networks training quality in general, and can be used to solve a wide class of applied problems.","PeriodicalId":386243,"journal":{"name":"HERALD OF POLOTSK STATE UNIVERSITY. Series С FUNDAMENTAL SCIENCES","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2022-11-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"NEURAL NETWORKS TRAINING BASED ON RANDOM SEARCH\",\"authors\":\"V. Matskevich\",\"doi\":\"10.52928/2070-1624-2022-39-11-21-29\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The paper deals with a state-of-art problem, associated with neural networks training. Training algorithm (with special parallelization procedure) implementing the annealing method is proposed. The training efficiency is demonstrated by the example of a neural network architecture focused on parallel data processing. For the color image compression problem, it is shown that the proposed algorithm significantly outperforms gradient \\nmethods in terms of efficiency. The results obtained make it possible to improve the neural networks training quality in general, and can be used to solve a wide class of applied problems.\",\"PeriodicalId\":386243,\"journal\":{\"name\":\"HERALD OF POLOTSK STATE UNIVERSITY. 
Series С FUNDAMENTAL SCIENCES\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-11-15\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"HERALD OF POLOTSK STATE UNIVERSITY. Series С FUNDAMENTAL SCIENCES\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.52928/2070-1624-2022-39-11-21-29\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"HERALD OF POLOTSK STATE UNIVERSITY. Series С FUNDAMENTAL SCIENCES","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.52928/2070-1624-2022-39-11-21-29","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
The paper deals with a state-of-the-art problem associated with neural network training. A training algorithm (with a special parallelization procedure) implementing the annealing method is proposed. Its training efficiency is demonstrated on a neural network architecture designed for parallel data processing. For the color image compression problem, the proposed algorithm is shown to significantly outperform gradient methods in terms of efficiency. The results make it possible to improve neural network training quality in general and can be used to solve a wide class of applied problems.
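The abstract does not give the paper's algorithm in detail, but the core idea it names — training network weights by annealing-based random search instead of gradients — can be illustrated with a generic simulated-annealing sketch. All names and parameters below (`loss`, `t_start`, `cooling`, `step_size`) are illustrative assumptions, not the author's method or parallelization procedure:

```python
import math
import random

def anneal_weights(loss, weights, t_start=1.0, t_end=1e-3,
                   cooling=0.95, steps_per_temp=20, step_size=0.1):
    """Minimize `loss` over a flat weight vector by simulated annealing.

    At each step one weight is randomly perturbed; the change is always
    accepted if it lowers the loss, and accepted with probability
    exp(-delta / T) otherwise, so the search can escape local minima.
    """
    current = list(weights)
    current_loss = loss(current)
    best, best_loss = list(current), current_loss
    t = t_start
    while t > t_end:
        for _ in range(steps_per_temp):
            candidate = list(current)
            i = random.randrange(len(candidate))
            candidate[i] += random.uniform(-step_size, step_size)
            cand_loss = loss(candidate)
            delta = cand_loss - current_loss
            # Metropolis acceptance rule: downhill moves always pass,
            # uphill moves pass with temperature-dependent probability.
            if delta < 0 or random.random() < math.exp(-delta / t):
                current, current_loss = candidate, cand_loss
                if current_loss < best_loss:
                    best, best_loss = list(current), current_loss
        t *= cooling  # geometric cooling schedule
    return best, best_loss

# Toy usage: drive a quadratic "loss" toward its minimum at the origin.
random.seed(0)
w, final_loss = anneal_weights(lambda w: sum(x * x for x in w), [2.0, -3.0])
```

Unlike gradient descent, this procedure needs only loss evaluations, so it applies to non-differentiable objectives; the per-candidate evaluations are also independent, which is what makes parallelization schemes like the one the paper proposes natural for this family of methods.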