Packed SIMD Vectorization of the DRAGON2-CB

Riadh Ben Abdelhamid, Y. Yamaguchi
{"title":"DRAGON2-CB的压缩SIMD矢量化","authors":"Riadh Ben Abdelhamid, Y. Yamaguchi","doi":"10.1109/MCSoC57363.2022.00023","DOIUrl":null,"url":null,"abstract":"For over a half-century, computer architects have explored micro-architecture, instruction set architecture, and system architecture to offer a significant performance boost out of a computing chip. In the micro-architecture, multi-processing and multi-threading arose as fusing highly parallel processing and the growth of semiconductor manufacturing technology. It has caused a paradigm shift in computing chips and led to the many-core processor age, such as NVIDIA GPUs, Movidius Myriad, PEZY ZettaScaler, and the project Eyeriss based on a reconfigurable accelerator. Wherein packed SIMD (Single Instruction Multiple Data) vectorizations attract attention, especially from ML (machine learning) applications. It can achieve more energy-efficient computing by reducing computing precision, which is enough for ML applications to obtain the results with low-accuracy calculations. In other words, accuracy-flexible computing needs to allow splitting off one N-bit ALU (Arithmetic Logic Unit) or one N-bit FPU (Floating-Point Unit) into multiple $M$-bit units. For example, a double-precision (64-bit operands width) FPU can be split into two single-precision (32-bit operands width) FPUs, or four half-precision (16-bit operands width) FPUs. Consequently, instead of executing one original operation, a packed SIMD vectorization simultaneously enables executing two or four reduced-precision operations. This article proposes a packed SIMD vectorization approach, which considers the Dynamically Reprogrammable Architecture of Gather-scatter Overlay Nodes-Compact Buffering (DRAGON2-CB) many-core overlay architecture. In particular, this article presents a thorough comparative study between packed SIMD using dual single-precision and quad half-precision FPU-only many-core overlays compared to the non-vectorized double-precision version.","PeriodicalId":150801,"journal":{"name":"2022 IEEE 15th International Symposium on Embedded Multicore/Many-core Systems-on-Chip (MCSoC)","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Packed SIMD Vectorization of the DRAGON2-CB\",\"authors\":\"Riadh Ben Abdelhamid, Y. Yamaguchi\",\"doi\":\"10.1109/MCSoC57363.2022.00023\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"For over a half-century, computer architects have explored micro-architecture, instruction set architecture, and system architecture to offer a significant performance boost out of a computing chip. In the micro-architecture, multi-processing and multi-threading arose as fusing highly parallel processing and the growth of semiconductor manufacturing technology. It has caused a paradigm shift in computing chips and led to the many-core processor age, such as NVIDIA GPUs, Movidius Myriad, PEZY ZettaScaler, and the project Eyeriss based on a reconfigurable accelerator. Wherein packed SIMD (Single Instruction Multiple Data) vectorizations attract attention, especially from ML (machine learning) applications. It can achieve more energy-efficient computing by reducing computing precision, which is enough for ML applications to obtain the results with low-accuracy calculations. 
In other words, accuracy-flexible computing needs to allow splitting off one N-bit ALU (Arithmetic Logic Unit) or one N-bit FPU (Floating-Point Unit) into multiple $M$-bit units. For example, a double-precision (64-bit operands width) FPU can be split into two single-precision (32-bit operands width) FPUs, or four half-precision (16-bit operands width) FPUs. Consequently, instead of executing one original operation, a packed SIMD vectorization simultaneously enables executing two or four reduced-precision operations. This article proposes a packed SIMD vectorization approach, which considers the Dynamically Reprogrammable Architecture of Gather-scatter Overlay Nodes-Compact Buffering (DRAGON2-CB) many-core overlay architecture. In particular, this article presents a thorough comparative study between packed SIMD using dual single-precision and quad half-precision FPU-only many-core overlays compared to the non-vectorized double-precision version.\",\"PeriodicalId\":150801,\"journal\":{\"name\":\"2022 IEEE 15th International Symposium on Embedded Multicore/Many-core Systems-on-Chip (MCSoC)\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 IEEE 15th International Symposium on Embedded Multicore/Many-core Systems-on-Chip (MCSoC)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/MCSoC57363.2022.00023\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE 15th International Symposium on Embedded Multicore/Many-core Systems-on-Chip (MCSoC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/MCSoC57363.2022.00023","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

For over half a century, computer architects have explored micro-architecture, instruction-set architecture, and system architecture to extract significant performance gains from a computing chip. At the micro-architecture level, multi-processing and multi-threading emerged from the fusion of highly parallel processing with advances in semiconductor manufacturing technology. This caused a paradigm shift in computing chips and ushered in the many-core processor age, exemplified by NVIDIA GPUs, Movidius Myriad, PEZY ZettaScaler, and the reconfigurable-accelerator-based Eyeriss project. In this context, packed SIMD (Single Instruction Multiple Data) vectorization is attracting attention, especially for ML (machine learning) applications. It achieves more energy-efficient computing by reducing computing precision, which is sufficient for ML applications that can obtain their results with low-accuracy calculations. In other words, accuracy-flexible computing must allow one N-bit ALU (Arithmetic Logic Unit) or one N-bit FPU (Floating-Point Unit) to be split into multiple M-bit units. For example, a double-precision FPU (64-bit operand width) can be split into two single-precision FPUs (32-bit operand width) or four half-precision FPUs (16-bit operand width). Consequently, instead of executing one full-precision operation, a packed SIMD vectorization executes two or four reduced-precision operations simultaneously. This article proposes a packed SIMD vectorization approach for the Dynamically Reprogrammable Architecture of Gather-scatter Overlay Nodes-Compact Buffering (DRAGON2-CB) many-core overlay architecture. In particular, it presents a thorough comparative study of packed SIMD using dual single-precision and quad half-precision FPU-only many-core overlays against the non-vectorized double-precision version.
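To make the splitting idea concrete, below is a minimal behavioral sketch in C. It is not the DRAGON2-CB hardware design or any code from the paper; it only models how one 64-bit datapath word can carry two packed single-precision lanes so that a single "packed add" performs two fp32 additions. The names pack2, lane, and packed_add2 are illustrative and assumed here, not taken from the paper.

/* Behavioral model of a dual single-precision split of a 64-bit datapath.
 * Two fp32 operands share one 64-bit word; packed_add2 issues two
 * independent fp32 additions, mimicking what a split FPU would do in one cycle. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Pack two fp32 values into one 64-bit word (lane 0 = low 32 bits). */
static uint64_t pack2(float lo, float hi) {
    uint32_t a, b;
    memcpy(&a, &lo, sizeof a);
    memcpy(&b, &hi, sizeof b);
    return (uint64_t)a | ((uint64_t)b << 32);
}

/* Extract lane i (0 or 1) of a packed word back to fp32. */
static float lane(uint64_t v, int i) {
    uint32_t bits = (uint32_t)(v >> (32 * i));
    float f;
    memcpy(&f, &bits, sizeof f);
    return f;
}

/* One packed operation = two reduced-precision operations. */
static uint64_t packed_add2(uint64_t x, uint64_t y) {
    return pack2(lane(x, 0) + lane(y, 0), lane(x, 1) + lane(y, 1));
}

int main(void) {
    uint64_t x = pack2(1.5f, -2.0f);
    uint64_t y = pack2(0.25f, 4.0f);
    uint64_t z = packed_add2(x, y);
    printf("lane0 = %f, lane1 = %f\n", lane(z, 0), lane(z, 1)); /* 1.75, 2.0 */
    return 0;
}

A quad half-precision split would be analogous, carrying four 16-bit lanes in the same 64-bit word and executing four reduced-precision operations per packed instruction.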