Survey on memory management techniques in heterogeneous computing systems

IF 1.1 | CAS Tier 4 (Computer Science) | JCR Q4 (Computer Science, Hardware & Architecture) | IET Computers and Digital Techniques, Vol. 14, Issue 2, pp. 47-60 | Pub Date: 2020-01-21 | DOI: 10.1049/iet-cdt.2019.0092
Anakhi Hazarika, Soumyajit Poddar, Hafizur Rahaman
{"title":"Survey on memory management techniques in heterogeneous computing systems","authors":"Anakhi Hazarika,&nbsp;Soumyajit Poddar,&nbsp;Hafizur Rahaman","doi":"10.1049/iet-cdt.2019.0092","DOIUrl":null,"url":null,"abstract":"<div>\n <p>A major issue faced by data scientists today is how to scale up their processing infrastructure to meet the challenge of big data and high-performance computing (HPC) workloads. With today's HPC domain, it is required to connect multiple graphics processing units (GPUs) to accomplish large-scale parallel computing along with CPUs. Data movement between the processor and on-chip or off-chip memory creates a major bottleneck in overall system performance. The CPU/GPU processes all the data on a computer's memory and hence the speed of the data movement to/from memory and the size of the memory affect computer speed. During memory access by any processing element, the memory management unit (MMU) controls the data flow of the computer's main memory and impacts the system performance and power. Change in dynamic random access memory (DRAM) architecture, integration of memory-centric hardware accelerator in the heterogeneous system and Processing-in-Memory (PIM) are the techniques adopted from all the available shared resource management techniques to maximise the system throughput. This survey study presents an analysis of various DRAM designs and their performances. The authors also focus on the architecture, functionality, and performance of different hardware accelerators and PIM systems to reduce memory access time. Some insights and potential directions toward enhancements to existing techniques are also discussed. The requirement of fast, reconfigurable, self-adaptive memory management schemes in the high-speed processing scenario motivates us to track the trend. An effective MMU handles memory protection, cache control and bus arbitration associated with the processors.</p>\n </div>","PeriodicalId":50383,"journal":{"name":"IET Computers and Digital Techniques","volume":"14 2","pages":"47-60"},"PeriodicalIF":1.1000,"publicationDate":"2020-01-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1049/iet-cdt.2019.0092","citationCount":"11","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IET Computers and Digital Techniques","FirstCategoryId":"94","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1049/iet-cdt.2019.0092","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE","Score":null,"Total":0}
Citations: 11

Abstract

A major issue faced by data scientists today is how to scale up their processing infrastructure to meet the challenge of big data and high-performance computing (HPC) workloads. In today's HPC domain, multiple graphics processing units (GPUs) must be connected alongside CPUs to accomplish large-scale parallel computing. Data movement between the processor and on-chip or off-chip memory creates a major bottleneck in overall system performance. The CPU/GPU processes all data held in a computer's memory, so both the speed of data movement to and from memory and the size of that memory affect computer speed. During memory access by any processing element, the memory management unit (MMU) controls the data flow of the computer's main memory and impacts system performance and power. Among the available shared-resource management techniques, changes in dynamic random access memory (DRAM) architecture, integration of memory-centric hardware accelerators into heterogeneous systems, and processing-in-memory (PIM) are those adopted to maximise system throughput. This survey presents an analysis of various DRAM designs and their performance. The authors also focus on the architecture, functionality, and performance of different hardware accelerators and PIM systems that reduce memory access time. Some insights and potential directions for enhancing existing techniques are also discussed. The need for fast, reconfigurable, self-adaptive memory management schemes in high-speed processing scenarios motivates the authors to track this trend. An effective MMU handles memory protection, cache control and bus arbitration associated with the processors.
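The data-movement bottleneck described in the abstract can be made concrete with a small timing experiment. The following CUDA sketch is not taken from the surveyed paper; the kernel, buffer size, and timing scheme are illustrative assumptions. It times the host-to-device copy, a trivial on-device kernel, and the device-to-host copy separately; on a typical PCIe-attached GPU the two copies dominate the kernel time, which is the overhead that PIM and memory-centric accelerators aim to reduce.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Trivial element-wise kernel: the arithmetic is negligible compared to the
// cost of moving the operands across the host-device interconnect.
__global__ void scale(float *x, float a, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= a;
}

int main() {
    const int n = 1 << 24;            // 16M floats (~64 MB), illustrative size
    const size_t bytes = n * sizeof(float);

    float *h = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) h[i] = 1.0f;

    float *d;
    cudaMalloc(&d, bytes);

    // CUDA events bracket each phase so the transfers and the kernel
    // can be timed independently (error checking omitted for brevity).
    cudaEvent_t t0, t1, t2, t3;
    cudaEventCreate(&t0); cudaEventCreate(&t1);
    cudaEventCreate(&t2); cudaEventCreate(&t3);

    cudaEventRecord(t0);
    cudaMemcpy(d, h, bytes, cudaMemcpyHostToDevice);   // host -> device copy
    cudaEventRecord(t1);
    scale<<<(n + 255) / 256, 256>>>(d, 2.0f, n);       // on-device compute
    cudaEventRecord(t2);
    cudaMemcpy(h, d, bytes, cudaMemcpyDeviceToHost);   // device -> host copy
    cudaEventRecord(t3);
    cudaEventSynchronize(t3);

    float tIn, tKernel, tOut;
    cudaEventElapsedTime(&tIn, t0, t1);
    cudaEventElapsedTime(&tKernel, t1, t2);
    cudaEventElapsedTime(&tOut, t2, t3);
    printf("H2D %.2f ms, kernel %.2f ms, D2H %.2f ms\n", tIn, tKernel, tOut);

    cudaFree(d);
    free(h);
    return 0;
}
```

Pinned host memory (cudaHostAlloc) or unified memory (cudaMallocManaged) would shift the numbers but not the underlying observation: for memory-bound work, transfer and access time, not compute, set the ceiling on throughput.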


Source journal
IET Computers and Digital Techniques (Engineering & Technology – Computer Science: Theory & Methods)
CiteScore: 3.50
Self-citation rate: 0.00%
Articles per year: 12
Review time: >12 weeks
Journal description: IET Computers & Digital Techniques publishes technical papers describing recent research and development work in all aspects of digital system-on-chip design and test of electronic and embedded systems, including the development of design automation tools (methodologies, algorithms and architectures). Papers based on the problems associated with the scaling down of CMOS technology are particularly welcome. It is aimed at researchers, engineers and educators in the fields of computer and digital systems design and test. The key subject areas of interest are:

Design Methods and Tools: CAD/EDA tools, hardware description languages, high-level and architectural synthesis, hardware/software co-design, platform-based design, 3D stacking and circuit design, system-on-chip architectures and IP cores, embedded systems, logic synthesis, low-power design and power optimisation.

Simulation, Test and Validation: electrical and timing simulation, simulation-based verification, hardware/software co-simulation and validation, mixed-domain technology modelling and simulation, post-silicon validation, power analysis and estimation, interconnect modelling and signal integrity analysis, hardware trust and security, design-for-testability, embedded core testing, system-on-chip testing, on-line testing, automatic test generation and delay testing, low-power testing, reliability, fault modelling and fault tolerance.

Processor and System Architectures: many-core systems, general-purpose and application-specific processors, computational arithmetic for DSP applications, arithmetic and logic units, cache memories, memory management, co-processors and accelerators, systems and networks on chip, embedded cores, platforms, multiprocessors, distributed systems, communication protocols and low-power issues.

Configurable Computing: embedded cores, FPGAs, rapid prototyping, adaptive computing, evolvable and statically and dynamically reconfigurable and reprogrammable systems, reconfigurable hardware.

Design for Variability, Power and Aging: design methods for variability, power and aging aware design, memories, FPGAs, IP components, 3D stacking, energy harvesting.

Case Studies: emerging applications, applications in industrial designs, and design frameworks.
Latest articles in this journal
E-Commerce Logistics Software Package Tracking and Route Planning and Optimization System of Embedded Technology Based on the Intelligent Era
A Configurable Accelerator for CNN-Based Remote Sensing Object Detection on FPGAs
A FPGA Accelerator of Distributed A3C Algorithm with Optimal Resource Deployment
An Efficient RTL Design for a Wearable Brain–Computer Interface
Adaptive Shrink and Shard Architecture Design for Blockchain Storage Efficiency