Methods for constructing effective neural networks for low-power systems

I.S. Markov, N.V. Pivovarova
Dynamics of Complex Systems - XXI century. DOI: 10.18127/j19997493-202102-05

Abstract

Formulation of the problem. The problem of deploying large neural networks with complex architectures on modern devices is considered. Such networks perform well, but their speed can be unacceptably low, and the memory required to place them on a device is not always available. The paper briefly describes how to address these problems with pruning and quantization. An unconventional type of neural network is proposed that can meet the requirements for memory footprint, speed, and quality of operation, along with approaches to training networks of this type. The aim of the work is to describe modern approaches to reducing the size of neural networks with minimal loss of quality and to propose an alternative type of network of small size and high accuracy. Results. The proposed type of neural network has significant advantages in terms of size and the flexibility of its layer settings. By varying the parameters of the layers, one can control the size, speed, and quality of the network. However, the greater the accuracy, the greater the memory required. To train such a small network, it is proposed to use techniques that allow it to learn complex dependencies from a larger, more complex network. After this training procedure, only the small network is used, which can then be deployed on low-power devices with little memory. Practical significance. The described methods allow the use of techniques that reduce the size of networks with minimal loss of quality. The proposed architecture makes it possible to train simpler networks without applying size-reduction techniques to them. These networks can work with various data, be it images, text, or other information encoded as a numerical vector.
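The three compression ideas the abstract names — pruning, quantization, and training a small network from a larger one via softened targets — can be sketched in a few lines. This is a minimal illustrative sketch in plain NumPy, not the paper's method: all function names are hypothetical, and the distillation part shows only the temperature-softened teacher distribution, not a full training loop.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction `sparsity` of the weights
    (unstructured magnitude pruning)."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

def quantize_int8(weights):
    """Symmetric uniform quantization to int8; returns (q, scale) so that
    weights are approximately q * scale. Shrinks storage 4x vs float32."""
    max_abs = np.abs(weights).max()
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximate float tensor from the int8 representation."""
    return q.astype(np.float32) * scale

def softened_softmax(logits, temperature):
    """Teacher's soft targets at temperature T. A small student network can be
    trained to match these instead of (or alongside) the hard labels."""
    z = logits / temperature
    z = z - z.max()          # for numerical stability
    e = np.exp(z)
    return e / e.sum()
```

Raising the temperature flattens the teacher's output distribution, exposing the relative similarities between classes that hard labels discard; that extra signal is what lets a small network learn dependencies captured by the larger one.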