Neuro-distributed cognitive adaptive optimization for training neural networks in a parallel and asynchronous manner

IF 5.8 | CAS Region 2 (Computer Science) | JCR Q1 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE) | Integrated Computer-Aided Engineering | Pub Date: 2023-08-06 | DOI: 10.3233/ica-230718
P. Michailidis, Iakovos T. Michailidis, Sokratis Gkelios, Georgios D. Karatzinis, Elias B. Kosmatopoulos
{"title":"以并行和异步方式训练神经网络的神经分布式认知自适应优化","authors":"P. Michailidis, Iakovos T. Michailidis, Sokratis Gkelios, Georgios D. Karatzinis, Elias B. Kosmatopoulos","doi":"10.3233/ica-230718","DOIUrl":null,"url":null,"abstract":"Distributed Machine learning has delivered considerable advances in training neural networks by leveraging parallel processing, scalability, and fault tolerance to accelerate the process and improve model performance. However, training of large-size models has exhibited numerous challenges, due to the gradient dependence that conventional approaches integrate. To improve the training efficiency of such models, gradient-free distributed methodologies have emerged fostering the gradient-independent parallel processing and efficient utilization of resources across multiple devices or nodes. However, such approaches, are usually restricted to specific applications, due to their conceptual limitations: computational and communicational requirements between partitions, limited partitioning solely into layers, limited sequential learning between the different layers, as well as training a potential model in solely synchronous mode. In this paper, we propose and evaluate, the Neuro-Distributed Cognitive Adaptive Optimization (ND-CAO) methodology, a novel gradient-free algorithm that enables the efficient distributed training of arbitrary types of neural networks, in both synchronous and asynchronous manner. Contrary to the majority of existing methodologies, ND-CAO is applicable to any possible splitting of a potential neural network, into blocks (partitions), with each of the blocks allowed to update its parameters fully asynchronously and independently of the rest of the blocks. Most importantly, no data exchange is required between the different blocks during training with the only information each block requires is the global performance of the model. Convergence of ND-CAO is mathematically established for generic neural network architectures, independently of the particular choices made, while four comprehensive experimental cases, considering different model architectures and image classification tasks, validate the algorithms’ robustness and effectiveness in both synchronous and asynchronous training modes. Moreover, by conducting a thorough comparison between synchronous and asynchronous ND-CAO training, the algorithm is identified as an efficient scheme to train neural networks in a novel gradient-independent, distributed, and asynchronous manner, delivering similar – or even improved results in Loss and Accuracy measures.","PeriodicalId":50358,"journal":{"name":"Integrated Computer-Aided Engineering","volume":"1 1","pages":""},"PeriodicalIF":5.8000,"publicationDate":"2023-08-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Neuro-distributed cognitive adaptive optimization for training neural networks in a parallel and asynchronous manner\",\"authors\":\"P. Michailidis, Iakovos T. Michailidis, Sokratis Gkelios, Georgios D. Karatzinis, Elias B. Kosmatopoulos\",\"doi\":\"10.3233/ica-230718\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Distributed Machine learning has delivered considerable advances in training neural networks by leveraging parallel processing, scalability, and fault tolerance to accelerate the process and improve model performance. 
However, training of large-size models has exhibited numerous challenges, due to the gradient dependence that conventional approaches integrate. To improve the training efficiency of such models, gradient-free distributed methodologies have emerged fostering the gradient-independent parallel processing and efficient utilization of resources across multiple devices or nodes. However, such approaches, are usually restricted to specific applications, due to their conceptual limitations: computational and communicational requirements between partitions, limited partitioning solely into layers, limited sequential learning between the different layers, as well as training a potential model in solely synchronous mode. In this paper, we propose and evaluate, the Neuro-Distributed Cognitive Adaptive Optimization (ND-CAO) methodology, a novel gradient-free algorithm that enables the efficient distributed training of arbitrary types of neural networks, in both synchronous and asynchronous manner. Contrary to the majority of existing methodologies, ND-CAO is applicable to any possible splitting of a potential neural network, into blocks (partitions), with each of the blocks allowed to update its parameters fully asynchronously and independently of the rest of the blocks. Most importantly, no data exchange is required between the different blocks during training with the only information each block requires is the global performance of the model. Convergence of ND-CAO is mathematically established for generic neural network architectures, independently of the particular choices made, while four comprehensive experimental cases, considering different model architectures and image classification tasks, validate the algorithms’ robustness and effectiveness in both synchronous and asynchronous training modes. Moreover, by conducting a thorough comparison between synchronous and asynchronous ND-CAO training, the algorithm is identified as an efficient scheme to train neural networks in a novel gradient-independent, distributed, and asynchronous manner, delivering similar – or even improved results in Loss and Accuracy measures.\",\"PeriodicalId\":50358,\"journal\":{\"name\":\"Integrated Computer-Aided Engineering\",\"volume\":\"1 1\",\"pages\":\"\"},\"PeriodicalIF\":5.8000,\"publicationDate\":\"2023-08-06\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Integrated Computer-Aided Engineering\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.3233/ica-230718\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Integrated Computer-Aided Engineering","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.3233/ica-230718","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Distributed machine learning has delivered considerable advances in training neural networks by leveraging parallel processing, scalability, and fault tolerance to accelerate training and improve model performance. However, training large models presents numerous challenges due to the gradient dependence inherent in conventional approaches. To improve the training efficiency of such models, gradient-free distributed methodologies have emerged, fostering gradient-independent parallel processing and efficient utilization of resources across multiple devices or nodes. However, such approaches are usually restricted to specific applications because of their conceptual limitations: computational and communication requirements between partitions, partitioning restricted solely to layers, limited sequential learning between the different layers, and training only in synchronous mode. In this paper, we propose and evaluate the Neuro-Distributed Cognitive Adaptive Optimization (ND-CAO) methodology, a novel gradient-free algorithm that enables efficient distributed training of arbitrary types of neural networks in both synchronous and asynchronous manners. Contrary to the majority of existing methodologies, ND-CAO is applicable to any possible splitting of a neural network into blocks (partitions), with each block allowed to update its parameters fully asynchronously and independently of the rest. Most importantly, no data exchange is required between the different blocks during training; the only information each block requires is the global performance of the model. Convergence of ND-CAO is mathematically established for generic neural network architectures, independently of the particular choices made, while four comprehensive experimental cases, considering different model architectures and image classification tasks, validate the algorithm's robustness and effectiveness in both synchronous and asynchronous training modes. Moreover, a thorough comparison between synchronous and asynchronous ND-CAO training identifies the algorithm as an efficient scheme for training neural networks in a novel gradient-independent, distributed, and asynchronous manner, delivering similar or even improved results in loss and accuracy measures.
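
To make the core idea concrete, below is a minimal Python sketch of gradient-free, block-wise training in the spirit described by the abstract: the network is split into blocks, each block updates its own parameters independently, and the only signal any block ever consults is the scalar global loss. This is an illustration under assumed details (random-perturbation hill climbing as the gradient-free update, a toy two-block regression network, and a random update schedule), not the paper's actual ND-CAO estimator.

```python
# Illustrative sketch only: gradient-free, block-wise training where blocks
# never exchange data and each sees only the global loss. The update rule
# (accept a random perturbation if the global loss improves) is a stand-in
# assumption, NOT the ND-CAO algorithm from the paper.
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data (hypothetical; stands in for an image-classification task).
X = rng.normal(size=(256, 8))
true_w = rng.normal(size=(8, 1))
y = np.tanh(X @ true_w) + 0.05 * rng.normal(size=(256, 1))

# Partition the model into independent blocks (here: two weight matrices).
blocks = {
    "W1": rng.normal(scale=0.5, size=(8, 16)),
    "W2": rng.normal(scale=0.5, size=(16, 1)),
}

def global_loss(params):
    """Scalar global performance -- the only information any block receives."""
    hidden = np.tanh(X @ params["W1"])
    pred = hidden @ params["W2"]
    return float(np.mean((pred - y) ** 2))

def block_step(params, name, sigma=0.05):
    """One gradient-free update of a single block, independent of the others."""
    candidate = dict(params)  # shallow copy; other blocks are untouched
    candidate[name] = params[name] + sigma * rng.normal(size=params[name].shape)
    # Keep the perturbation only if it improves the global loss.
    return candidate if global_loss(candidate) < global_loss(params) else params

# "Asynchronous" schedule: blocks update in random order, never exchanging
# data with one another -- each consults only the shared global loss.
for step in range(2000):
    name = rng.choice(list(blocks))
    blocks = block_step(blocks, name)
    if step % 500 == 0:
        print(f"step {step:4d}  loss {global_loss(blocks):.4f}")
```

In a genuinely asynchronous deployment, each block's update loop would run in its own process or on its own node, with the shared global loss served by a lightweight coordinator; the sketch serializes the schedule purely for readability.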
Source journal
Integrated Computer-Aided Engineering (Engineering Technology - Engineering, Multidisciplinary)
CiteScore: 9.90
Self-citation rate: 21.50%
Articles per year: 21
Review time: >12 weeks
Journal description: Integrated Computer-Aided Engineering (ICAE) was founded in 1993, "based on the premise that interdisciplinary thinking and synergistic collaboration of disciplines can solve complex problems, open new frontiers, and lead to true innovations and breakthroughs, the cornerstone of industrial competitiveness and advancement of the society," as noted in the inaugural issue of the journal. The focus of ICAE is the integration of leading-edge and emerging computer and information technologies for innovative solution of engineering problems. The journal fosters interdisciplinary research and presents a unique forum for innovative computer-aided engineering. It also publishes novel industrial applications of CAE, thus helping to bring new computational paradigms from research labs and classrooms to reality. Areas covered by the journal include (but are not limited to) artificial intelligence, advanced signal processing, biologically inspired computing, cognitive modeling, concurrent engineering, database management, distributed computing, evolutionary computing, fuzzy logic, genetic algorithms, geometric modeling, intelligent and adaptive systems, internet-based technologies, knowledge discovery and engineering, machine learning, mechatronics, mobile computing, multimedia technologies, networking, neural network computing, object-oriented systems, optimization and search, parallel processing, robotics, virtual reality, and visualization techniques.
Latest articles from this journal
- A parametric and feature-based CAD dataset to support human-computer interaction for advanced 3D shape learning
- A high-level simulator for Network-on-Chip
- Efficient surface defect detection in industrial screen printing with minimized labeling effort
- Effectiveness of deep learning techniques in TV programs classification: A comparative analysis
- Railway alignment optimization in regions with densely-distributed obstacles based on semantic topological maps