Training Ultra Long Context Language Model with Fully Pipelined Distributed Transformer

Jinghan Yao, Sam Ade Jacobs, Masahiro Tanaka, Olatunji Ruwase, Aamir Shafi, Hari Subramoni, Dhabaleswar K. Panda

arXiv:2408.16978 (arXiv - CS - Distributed, Parallel, and Cluster Computing, 2024-08-30)
Abstract
Large Language Models (LLMs) with long-context capabilities are integral to complex tasks in natural language processing and computational biology, such as text generation and protein sequence analysis. However, training LLMs directly on extremely long contexts demands considerable GPU resources and increased memory, leading to higher costs and greater complexity. Alternative approaches that introduce long-context capabilities via downstream fine-tuning or adaptations impose significant design limitations. In this paper, we propose the Fully Pipelined Distributed Transformer (FPDT) for training long-context LLMs with extreme hardware efficiency. For GPT and Llama models, FPDT increases the sequence length that can be trained on the same hardware by 16x compared with current state-of-the-art solutions. With our dedicated sequence-chunk pipeline design, we can now train an 8B LLM with a 2-million-token sequence length on only 4 GPUs while maintaining over 55% MFU. FPDT is agnostic to existing training techniques and is shown to work efficiently across different LLMs.
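The abstract describes the sequence-chunk pipeline only at a high level. As a rough illustration of why processing attention in chunks keeps activation memory bounded regardless of total sequence length, below is a minimal, hypothetical PyTorch sketch of chunked attention with an online softmax. It is not the authors' FPDT implementation (which, per the abstract, adds a dedicated sequence-chunk pipeline across distributed GPUs); the function `chunked_attention` and the `chunk_size` parameter are illustrative assumptions.

```python
# Minimal sketch: the query block attends to key/value chunks one at a time,
# keeping a running max and normalizer (online softmax) so the full
# [seq_len x seq_len] score matrix is never materialized.
import torch

def chunked_attention(q, k, v, chunk_size=1024):
    """q, k, v: [batch, heads, seq_len, head_dim]; non-causal for brevity."""
    scale = q.shape[-1] ** -0.5
    out = torch.zeros_like(q)
    # Running softmax statistics, one scalar per query position.
    running_max = torch.full(q.shape[:-1] + (1,), float("-inf"),
                             device=q.device, dtype=q.dtype)
    running_sum = torch.zeros_like(running_max)

    for start in range(0, k.shape[2], chunk_size):
        k_c = k[:, :, start:start + chunk_size]
        v_c = v[:, :, start:start + chunk_size]
        scores = torch.matmul(q, k_c.transpose(-1, -2)) * scale  # [b, h, Sq, chunk]

        # Rescale previously accumulated results when a new maximum appears.
        new_max = torch.maximum(running_max, scores.amax(dim=-1, keepdim=True))
        correction = torch.exp(running_max - new_max)
        probs = torch.exp(scores - new_max)

        running_sum = running_sum * correction + probs.sum(dim=-1, keepdim=True)
        out = out * correction + torch.matmul(probs, v_c)
        running_max = new_max

    return out / running_sum

if __name__ == "__main__":
    torch.manual_seed(0)
    b, h, s, d = 1, 4, 4096, 64
    q, k, v = (torch.randn(b, h, s, d) for _ in range(3))
    ref = torch.softmax((q @ k.transpose(-1, -2)) * d ** -0.5, dim=-1) @ v
    print(torch.allclose(chunked_attention(q, k, v, 512), ref, atol=1e-4))
```

In this sketch, each step materializes only a [Sq x chunk_size] score block, so peak activation memory scales with the chunk size rather than with the full sequence length; the distributed pipelining and hardware-efficiency results reported in the abstract are beyond this single-GPU illustration.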