Jiashu Zhang, Zihan Pan, Molly Xu, Khuzaima Daudjee, Sihang Liu
arXiv:2409.06941 — arXiv - CS - Distributed, Parallel, and Cluster Computing, published 2024-09-11.
FreeRide: Harvesting Bubbles in Pipeline Parallelism
The occurrence of bubbles in pipeline parallelism is an inherent limitation
that can account for more than 40% of the large language model (LLM) training
time and is one of the main reasons for the underutilization of GPU resources
in LLM training. Harvesting these bubbles for GPU side tasks can increase
resource utilization and reduce training costs but comes with challenges.
First, because bubbles are discontinuous and vary in shape, programming side
tasks is difficult and requires excessive engineering effort. Second, a
side task can compete with pipeline training for GPU resources and incur
significant overhead. To address these challenges, we propose FreeRide, a
system designed to harvest bubbles in pipeline parallelism for side tasks.
FreeRide provides programmers with interfaces to implement side tasks easily,
manages bubbles and side tasks during pipeline training, and controls access to
GPU resources by side tasks to reduce overhead. We demonstrate that FreeRide
achieves 7.8% average cost savings with a negligible overhead of about 1% in
training LLMs while serving model training, graph analytics, and image
processing side tasks.
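To make the idea concrete, here is a minimal sketch of how a step-based side-task interface could harvest a bubble: the task is broken into short, resumable steps, and the system runs only as many steps as fit in the bubble before yielding the GPU back to pipeline training. All class and function names here are illustrative assumptions, not FreeRide's actual API.

```python
class SideTask:
    """Hypothetical side task broken into short, resumable steps
    (e.g., one training mini-batch per step)."""

    def __init__(self, total_steps):
        self.total_steps = total_steps
        self.step_idx = 0

    def step(self):
        # One short unit of GPU work; kept small so the task can be
        # paused at a step boundary when the bubble ends.
        self.step_idx += 1

    def done(self):
        return self.step_idx >= self.total_steps


def harvest_bubble(task, bubble_duration_s, step_cost_s):
    """Run as many task steps as fit in one bubble, then stop so the
    GPU is returned to pipeline training before the bubble closes."""
    remaining = bubble_duration_s
    steps_run = 0
    while not task.done() and remaining >= step_cost_s:
        task.step()
        remaining -= step_cost_s
        steps_run += 1
    return steps_run


# Example: a 0.9 s bubble fits 4 steps of 0.2 s each.
task = SideTask(total_steps=10)
ran = harvest_bubble(task, bubble_duration_s=0.9, step_cost_s=0.2)
print(ran)  # 4
```

The step-boundary design is what keeps the overhead low: the side task never holds the GPU past the bubble, so it does not stall the next pipeline stage.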