Fixed-Point Optimization of Transformer Neural Network

Yoonho Boo, Wonyong Sung
ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 1753-1757, May 2020.
DOI: 10.1109/ICASSP40776.2020.9054724
Citations: 12

Abstract

The Transformer model adopts a self-attention structure and shows very good performance in various natural language processing tasks. However, it is difficult to implement the Transformer in embedded systems because of its very large model size. In this study, we quantize the parameters and hidden signals of the Transformer for complexity reduction. Not only the weight and embedding matrices but also the inputs and softmax outputs are quantized to utilize low-precision matrix multiplication. The fixed-point optimization steps consist of quantization sensitivity analysis, hardware-conscious word-length assignment, quantization and retraining, and post-training for improved generalization. We achieved a 27.51 BLEU score on the WMT English-to-German translation task with 4-bit weights and 6-bit hidden signals.
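To illustrate the kind of fixed-point conversion described above, the following is a minimal sketch of symmetric uniform quantization applied to a weight matrix (4 bits) and hidden activations (6 bits), followed by a low-precision matrix multiplication. The step-size heuristic here (scaling to the empirical maximum) is an assumption for illustration; the paper selects word lengths per matrix via sensitivity analysis and recovers accuracy with retraining.

```python
import numpy as np

def uniform_quantize(x, num_bits, step=None):
    """Symmetric uniform quantization to `num_bits` (sign bit included).

    If `step` is None, the step size is chosen so the quantizer spans the
    empirical range of `x` -- one simple heuristic, not the paper's exact
    word-length assignment procedure.
    """
    levels = 2 ** (num_bits - 1) - 1        # e.g. 7 positive levels for 4 bits
    max_abs = np.max(np.abs(x))
    if step is None:
        step = max_abs / levels if max_abs > 0 else 1.0
    # Round to the nearest level and saturate at the representable range.
    q = np.clip(np.round(x / step), -levels, levels)
    return q * step

# Example: 4-bit weights and 6-bit hidden signals, the setting that
# achieved 27.51 BLEU in the paper.
rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=(8, 8))      # a weight matrix
h = rng.normal(size=(4, 8))                 # hidden activations
w_q = uniform_quantize(w, num_bits=4)
h_q = uniform_quantize(h, num_bits=6)
out = h_q @ w_q.T                           # low-precision matrix multiplication
```

In practice, quantization-aware retraining (e.g. with a straight-through gradient estimator) would be applied after this forward quantization to recover the accuracy lost at such short word lengths.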