Furion: alleviating overheads for deep learning framework on single machine (work-in-progress)

L. Jin, Chao Wang, Lei Gong, Chongchong Xu, Yahui Hu, Luchao Tan, Xuehai Zhou
{"title":"Furion:减轻单机上深度学习框架的开销(正在开发中)","authors":"L. Jin, Chao Wang, Lei Gong, Chongchong Xu, Yahui Hu, Luchao Tan, Xuehai Zhou","doi":"10.5555/3283568.3283582","DOIUrl":null,"url":null,"abstract":"Deep learning has been successful at solving many kinds of tasks. Hardware accelerators with high performance and parallelism have become mainstream to implement deep neural networks. In order to increase hardware utilization, multiple applications will share the same compute resource. However, different applications may use different deep learning frameworks and occupy different amounts of resources. If there are no scheduling platforms that are compatible with different frameworks, resources competition will result in longer response time, run out of memory, and other errors. When the resources of the system cannot satisfy all the applications at the same time, application switching overhead will be excessive without reasonable resource management strategy.In this paper, we propose Furion - a middleware alleviates overheads for deep learning framework on a single machine. Furion schedules tasks, overlaps the execution of different computing resource, and batches unknown inputs to increase the hardware accelerator utilization. It dynamically manages memory usage for each application to alleviate the overhead of application switching and make a complex model enable implement in a low-end GPU. Our experiment proved that Furion achieves 2.2x-2.7x speedup on the GTX1060.","PeriodicalId":300268,"journal":{"name":"International Conference on Hardware/Software Codesign and System Synthesis","volume":"166 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Furion: alleviating overheads for deep learning framework on single machine (work-in-progress)\",\"authors\":\"L. Jin, Chao Wang, Lei Gong, Chongchong Xu, Yahui Hu, Luchao Tan, Xuehai Zhou\",\"doi\":\"10.5555/3283568.3283582\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Deep learning has been successful at solving many kinds of tasks. Hardware accelerators with high performance and parallelism have become mainstream to implement deep neural networks. In order to increase hardware utilization, multiple applications will share the same compute resource. However, different applications may use different deep learning frameworks and occupy different amounts of resources. If there are no scheduling platforms that are compatible with different frameworks, resources competition will result in longer response time, run out of memory, and other errors. When the resources of the system cannot satisfy all the applications at the same time, application switching overhead will be excessive without reasonable resource management strategy.In this paper, we propose Furion - a middleware alleviates overheads for deep learning framework on a single machine. Furion schedules tasks, overlaps the execution of different computing resource, and batches unknown inputs to increase the hardware accelerator utilization. It dynamically manages memory usage for each application to alleviate the overhead of application switching and make a complex model enable implement in a low-end GPU. 
Our experiment proved that Furion achieves 2.2x-2.7x speedup on the GTX1060.\",\"PeriodicalId\":300268,\"journal\":{\"name\":\"International Conference on Hardware/Software Codesign and System Synthesis\",\"volume\":\"166 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2018-09-30\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"International Conference on Hardware/Software Codesign and System Synthesis\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.5555/3283568.3283582\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Conference on Hardware/Software Codesign and System Synthesis","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.5555/3283568.3283582","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

Deep learning has been successful at solving many kinds of tasks, and hardware accelerators with high performance and parallelism have become the mainstream way to implement deep neural networks. To increase hardware utilization, multiple applications share the same compute resources. However, different applications may use different deep learning frameworks and occupy different amounts of resources. Without a scheduling platform that is compatible with the different frameworks, resource competition leads to longer response times, out-of-memory failures, and other errors. When the system's resources cannot satisfy all applications at the same time, application-switching overhead becomes excessive unless a reasonable resource-management strategy is in place. In this paper, we propose Furion, a middleware that alleviates overheads for deep learning frameworks on a single machine. Furion schedules tasks, overlaps the execution of different computing resources, and batches unknown inputs to increase hardware accelerator utilization. It dynamically manages the memory usage of each application to reduce the overhead of application switching and to make complex models feasible on a low-end GPU. Our experiments show that Furion achieves a 2.2x-2.7x speedup on a GTX 1060.
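The abstract names batching of inputs that arrive at unknown times as one way Furion raises accelerator utilization, but gives no implementation details. The following is therefore only a minimal sketch of the general dynamic-batching technique in Python; the names (`MAX_BATCH`, `MAX_WAIT_S`, `run_on_accelerator`) are assumptions, not Furion's API. Requests are collected until either a batch-size cap or a short timeout is reached, then dispatched to the accelerator together.

```python
# Illustrative sketch of dynamic batching of requests with unknown arrival times.
# Not Furion's actual implementation; all names and constants are assumed.
import queue
import threading
import time

MAX_BATCH = 8          # largest batch handed to the accelerator (assumed value)
MAX_WAIT_S = 0.005     # longest a request may wait for batch-mates (assumed value)

requests = queue.Queue()   # inference requests arrive here from applications

def run_on_accelerator(batch):
    # Placeholder for the framework-specific call (e.g. a GPU forward pass).
    print(f"running batch of {len(batch)} inputs")

def batching_loop():
    while True:
        batch = [requests.get()]                      # block until one request exists
        deadline = time.monotonic() + MAX_WAIT_S
        while len(batch) < MAX_BATCH:
            remaining = deadline - time.monotonic()
            if remaining <= 0:
                break
            try:
                batch.append(requests.get(timeout=remaining))
            except queue.Empty:
                break
        run_on_accelerator(batch)

threading.Thread(target=batching_loop, daemon=True).start()

# Example: applications submit inputs at unpredictable times.
for i in range(20):
    requests.put(f"input-{i}")
    time.sleep(0.001)
time.sleep(0.1)   # let the batcher drain the queue before the script exits
```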
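The abstract also says Furion dynamically manages each application's memory to cut application-switching overhead and to fit complex models on a low-end GPU. The sketch below shows one plausible form of that idea, assuming PyTorch as the underlying framework and a CUDA-capable GPU; the `ModelSlot` class and its methods are hypothetical and not Furion's mechanism. An inactive application's weights are parked in host RAM rather than reloaded from disk, and the GPU allocator's cached blocks are released for the next application.

```python
# Illustrative sketch of per-application GPU memory management during switching.
# Assumes PyTorch and an available CUDA device; not Furion's actual implementation.
import torch

class ModelSlot:
    """Holds one application's model and moves it between host and GPU memory."""
    def __init__(self, model):
        self.model = model.to("cpu")     # start with weights in host RAM

    def activate(self, device="cuda"):
        self.model.to(device)            # page the parameters onto the accelerator
        return self.model

    def deactivate(self):
        self.model.to("cpu")             # keep weights in host RAM, not on disk
        torch.cuda.empty_cache()         # return freed blocks to the GPU allocator

# Hypothetical usage with two applications sharing one GPU:
slot_a = ModelSlot(torch.nn.Linear(4096, 4096))
slot_b = ModelSlot(torch.nn.Linear(4096, 4096))

model = slot_a.activate()                            # application A runs
_ = model(torch.randn(1, 4096, device="cuda"))
slot_a.deactivate()                                  # scheduler switches applications
model = slot_b.activate()                            # application B runs
_ = model(torch.randn(1, 4096, device="cuda"))
slot_b.deactivate()
```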