Swift-CNN: Leveraging PCM Memory’s Fast Write Mode to Accelerate CNNs

IEEE Embedded Systems Letters, vol. 15, no. 4, pp. 234-237 · Published: 2023-09-25 · DOI: 10.1109/LES.2023.3298742 · IF 1.7 · JCR Q3 (Computer Science, Hardware & Architecture)
Lokesh Siddhu, Hassan Nassar, Lars Bauer, Christian Hakert, Nils Hölscher, Jian-Jia Chen, Joerg Henkel

Abstract

Nonvolatile memories [especially phase change memories (PCMs)] offer scalability and higher density. However, reduced write performance has limited their use as main memory. Researchers have explored using the fast write mode available in PCM to alleviate the challenges. The fast write mode offers lower write latency and energy consumption. However, the fast-written data are retained for a limited time and need to be refreshed. Prior works perform fast writes when the memory is busy and use slow writes to refresh the data during memory idle phases. Such policies do not consider the retention time requirement of a variable and repeat all the writes made during the busy phase. In this work, we suggest a retention-time-aware selection of write modes. As a case study, we use convolutional neural networks (CNNs) and present a novel algorithm, Swift-CNN, that assesses each CNN layer’s memory access behavior and retention time requirement and suggests an appropriate PCM write mode. Our results show that Swift-CNN decreases inference and training execution time and memory energy compared to state-of-the-art techniques and achieves execution time close to the ideal (fast write-only) policy.
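The core idea above — choose the fast PCM write mode only when a value's required lifetime fits within the fast mode's retention window — can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the retention constant, the per-layer lifetime figures, and the function and variable names are all assumptions for the sake of the example.

```python
# Hypothetical sketch of retention-time-aware PCM write-mode selection.
# The threshold and layer data below are illustrative assumptions,
# not values from the Swift-CNN paper.

FAST_RETENTION_S = 1.0  # assumed retention window of a fast PCM write


def choose_write_mode(lifetime_s: float) -> str:
    """Pick a PCM write mode for data that must stay valid for
    `lifetime_s` seconds.

    A fast write has lower latency and energy, but its data decays
    after roughly FAST_RETENTION_S and would need a refresh. Using
    fast writes only when the required lifetime fits in the retention
    window avoids the blanket refresh traffic of prior busy/idle
    policies, which re-wrote every fast-written value.
    """
    return "fast" if lifetime_s <= FAST_RETENTION_S else "slow"


# Per-layer example: a layer's activations are typically dead once the
# next layer has consumed them, so their required lifetime is short;
# weights must persist for the whole inference or training run.
layers = [
    {"name": "conv1_activations", "lifetime_s": 0.02},
    {"name": "conv1_weights", "lifetime_s": 3600.0},
]
modes = {l["name"]: choose_write_mode(l["lifetime_s"]) for l in layers}
```

Under this policy, short-lived activations get cheap fast writes with no refresh ever needed, while long-lived weights take the durable slow write once, up front.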
Source journal: IEEE Embedded Systems Letters (Engineering: Control and Systems Engineering)
CiteScore: 3.30
Self-citation rate: 0.00%
Articles per year: 65
Journal scope: The IEEE Embedded Systems Letters (ESL) provides a forum for rapid dissemination of the latest technical advances in embedded systems and related areas in embedded software. The emphasis is on models, methods, and tools that ensure secure, correct, efficient, and robust design of embedded systems and their applications.
Latest articles in this journal:
- Time-Sensitive Networking in Low Latency Cyber-Physical Systems
- FedTinyWolf - A Memory Efficient Federated Embedded Learning Mechanism
- SCALLER: Standard Cell Assembled and Local Layout Effect-Based Ring Oscillators
- Table of Contents
- IEEE Embedded Systems Letters Publication Information