Online-Codistillation Meets LARS, Going beyond the Limit of Data Parallelism in Deep Learning

Shogo Murai, Hiroaki Mikami, Masanori Koyama, Shuji Suzuki, Takuya Akiba
{"title":"Online-Codistillation Meets LARS, Going beyond the Limit of Data Parallelism in Deep Learning","authors":"Shogo Murai, Hiroaki Mikami, Masanori Koyama, Shuji Suzuki, Takuya Akiba","doi":"10.1109/DLS51937.2020.00006","DOIUrl":null,"url":null,"abstract":"Data parallel training is a powerful family of methods for the efficient training of deep neural networks on big data. Unfortunately, however, recent studies have shown that the merit of increased batch size in terms of both speed and model-performance diminishes rapidly beyond some point. This seem to apply to even LARS, the state-of-the-art large batch stochastic optimization method. In this paper, we combine LARS with online-codistillation, a recently developed, efficient deep learning algorithm built on a whole different philosophy of stabilizing the training procedure using a collaborative ensemble of models. We show that the combination of large-batch training and online-codistillation is much more efficient than either one alone. We also present a novel way of implementing the online-codistillation that can further speed up the computation. We will demonstrate the efficacy of our approach on various benchmark datasets.","PeriodicalId":185533,"journal":{"name":"2020 IEEE/ACM Fourth Workshop on Deep Learning on Supercomputers (DLS)","volume":"41 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 IEEE/ACM Fourth Workshop on Deep Learning on Supercomputers (DLS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/DLS51937.2020.00006","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Data parallel training is a powerful family of methods for the efficient training of deep neural networks on big data. Unfortunately, however, recent studies have shown that the merit of increased batch size, in terms of both speed and model performance, diminishes rapidly beyond some point. This seems to apply even to LARS, the state-of-the-art large-batch stochastic optimization method. In this paper, we combine LARS with online-codistillation, a recently developed, efficient deep learning algorithm built on an entirely different philosophy: stabilizing the training procedure with a collaborative ensemble of models. We show that the combination of large-batch training and online-codistillation is much more efficient than either one alone. We also present a novel way of implementing online-codistillation that further speeds up the computation. We demonstrate the efficacy of our approach on various benchmark datasets.
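To make the two ingredients of the abstract concrete, the following is a minimal, illustrative sketch (not the authors' implementation) of a LARS-style layer-wise learning-rate scaling combined with a two-model online-codistillation loss, written with PyTorch. All names and hyperparameters here (`lars_scaled_sgd_step`, `codistill_loss`, `trust_coef`, `distill_weight`, the toy models and data) are assumptions chosen for illustration only.

```python
# Hypothetical sketch: LARS-style layer-wise scaling + two-model online codistillation.
import torch
import torch.nn.functional as F


def lars_scaled_sgd_step(model, base_lr=0.1, trust_coef=0.001, weight_decay=1e-4):
    """One LARS-style SGD update: scale each layer's step by ||w|| / ||g||."""
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is None:
                continue
            g = p.grad + weight_decay * p
            w_norm, g_norm = p.norm(), g.norm()
            # Layer-wise trust ratio; fall back to 1.0 when a norm is zero.
            local_lr = trust_coef * w_norm / g_norm if w_norm > 0 and g_norm > 0 else 1.0
            p.add_(g, alpha=-float(base_lr * local_lr))


def codistill_loss(logits_a, logits_b, targets, distill_weight=1.0):
    """Task loss for model A plus a KL term pulling A toward the (detached) peer B."""
    ce = F.cross_entropy(logits_a, targets)
    kl = F.kl_div(F.log_softmax(logits_a, dim=1),
                  F.softmax(logits_b.detach(), dim=1),
                  reduction="batchmean")
    return ce + distill_weight * kl


# Toy usage: two small models codistilling on random data for one step each.
model_a, model_b = torch.nn.Linear(16, 4), torch.nn.Linear(16, 4)
x = torch.randn(32, 16)
y = torch.randint(0, 4, (32,))

for model, peer in ((model_a, model_b), (model_b, model_a)):
    model.zero_grad()
    loss = codistill_loss(model(x), peer(x), y)
    loss.backward()
    lars_scaled_sgd_step(model)
```

In a real distributed setting each peer model would be trained by its own group of workers with large batches, and the peers would exchange predictions (or parameters) only periodically; this sketch collapses that to a single process purely to show how the distillation term and the layer-wise scaling fit together.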