WallFacer: Guiding Transformer Model Training Out of the Long-Context Dark Forest with N-body Problem

Ziming Liu, Shaoyu Wang, Shenggan Cheng, Zhongkai Zhao, Yang Bai, Xuanlei Zhao, James Demmel, Yang You
{"title":"WallFacer: Guiding Transformer Model Training Out of the Long-Context Dark Forest with N-body Problem","authors":"Ziming Liu, Shaoyu Wang, Shenggan Cheng, Zhongkai Zhao, Yang Bai, Xuanlei Zhao, James Demmel, Yang You","doi":"arxiv-2407.00611","DOIUrl":null,"url":null,"abstract":"In recent years, Transformer-based Large Language Models (LLMs) have garnered\nsignificant attention due to their exceptional performance across a variety of\ntasks. However, training these models on long sequences presents a substantial\nchallenge in terms of efficiency and scalability. Current methods are\nconstrained either by the number of attention heads, limiting scalability, or\nby excessive communication overheads. In this paper, we propose an insight that\nAttention Computation can be considered as a special case of n-body problem\nwith direct interactions. Based on this concept, this paper introduces\nWallFacer, an efficient long-sequence training system with a novel\nmulti-dimensional ring sequence parallelism, fostering an efficient\ncommunication paradigm and extra tuning space for communication arrangement.\nThrough comprehensive experiments under diverse environments and model\nsettings, we demonstrate that WallFacer significantly surpasses\nstate-of-the-art method that supports near-infinite sequence length, achieving\nperformance improvements of up to 77.12%.","PeriodicalId":501422,"journal":{"name":"arXiv - CS - Distributed, Parallel, and Cluster Computing","volume":"133 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Distributed, Parallel, and Cluster Computing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2407.00611","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

In recent years, Transformer-based Large Language Models (LLMs) have garnered significant attention due to their exceptional performance across a variety of tasks. However, training these models on long sequences presents a substantial challenge in terms of efficiency and scalability. Current methods are constrained either by the number of attention heads, which limits scalability, or by excessive communication overheads. In this paper, we propose the insight that attention computation can be considered a special case of the n-body problem with direct interactions. Based on this concept, this paper introduces WallFacer, an efficient long-sequence training system with a novel multi-dimensional ring sequence parallelism, which fosters an efficient communication paradigm and provides extra tuning space for communication arrangement. Through comprehensive experiments under diverse environments and model settings, we demonstrate that WallFacer significantly surpasses the state-of-the-art method that supports near-infinite sequence length, achieving performance improvements of up to 77.12%.
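The abstract's central analogy, treating attention as an n-body problem in which every query interacts directly with every key/value pair, is what motivates ring-style sequence parallelism: each worker owns one shard of the sequence, and key/value shards circulate around a ring so that every query shard eventually interacts with every key/value shard. The sketch below is a minimal, single-process simulation of that communication pattern; it is not the WallFacer implementation, and the block partitioning, the online-softmax accumulation, and the function name ring_attention_sim are illustrative assumptions.

```python
import numpy as np

def ring_attention_sim(q, k, v, num_workers):
    """Single-process simulation of ring-style sequence parallelism.

    q, k, v: (seq_len, d) arrays; seq_len must be divisible by num_workers.
    Each simulated worker owns one contiguous sequence shard and, at every
    ring step, processes the key/value shard that would have arrived from
    its neighbour, accumulating results with a numerically stable
    online softmax.
    """
    seq_len, d = q.shape
    shard = seq_len // num_workers
    q_blocks = [q[i * shard:(i + 1) * shard] for i in range(num_workers)]
    k_blocks = [k[i * shard:(i + 1) * shard] for i in range(num_workers)]
    v_blocks = [v[i * shard:(i + 1) * shard] for i in range(num_workers)]

    # Per-worker running statistics for the online softmax.
    out = [np.zeros((shard, d)) for _ in range(num_workers)]
    row_max = [np.full((shard, 1), -np.inf) for _ in range(num_workers)]
    row_sum = [np.zeros((shard, 1)) for _ in range(num_workers)]

    for step in range(num_workers):
        for rank in range(num_workers):
            # At ring step `step`, worker `rank` holds the K/V shard that
            # originated at worker (rank - step) mod num_workers.
            src = (rank - step) % num_workers
            scores = q_blocks[rank] @ k_blocks[src].T / np.sqrt(d)

            # Online softmax: rescale previous partial results to the new
            # running maximum before adding this shard's contribution.
            new_max = np.maximum(row_max[rank],
                                 scores.max(axis=1, keepdims=True))
            scale = np.exp(row_max[rank] - new_max)
            p = np.exp(scores - new_max)

            row_sum[rank] = row_sum[rank] * scale + p.sum(axis=1, keepdims=True)
            out[rank] = out[rank] * scale + p @ v_blocks[src]
            row_max[rank] = new_max

    return np.concatenate([o / s for o, s in zip(out, row_sum)], axis=0)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    q, k, v = (rng.standard_normal((16, 8)) for _ in range(3))
    ref = np.exp(q @ k.T / np.sqrt(8))
    ref = (ref / ref.sum(axis=1, keepdims=True)) @ v
    assert np.allclose(ring_attention_sim(q, k, v, num_workers=4), ref)
    print("ring simulation matches full attention")
```

Accumulating with an online softmax is what lets each simulated worker fold in partial results as shards arrive, instead of materializing the full seq_len x seq_len score matrix; in a real multi-worker setting, the inner loop body would run on each device while key/value shards are exchanged between ring neighbours.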