Pub Date : 2025-08-01 DOI: 10.1109/LCA.2025.3595003
Nayana Rajeev;Cathrene Biju;Titu Mary Ignatius;Roy Paily Palathinkal;Rekha K James
This paper presents RAESC, a reconfigurable Advanced Encryption Standard (AES) countermeasure hardware design that supports AES-128, AES-192, and AES-256, enhancing flexibility and resource efficiency in IoT applications. The design incorporates a countermeasure against Power-based Side-Channel Attacks (PSCA) by randomizing the AES type based on the input plaintext, improving security. RAESC is integrated with an RV32IM RISC-V processor, offering streamlined operation and enhanced system security. Performance analysis shows that RAESC’s adaptive encryption strength achieves a balanced trade-off among area, power, and throughput, making it well suited to resource-constrained, security-sensitive IoT applications. Power traces for Correlation Power Analysis (CPA) attacks are generated on an Application-Specific Integrated Circuit (ASIC), and the design achieves a notable reduction in the Signal-to-Noise Ratio (SNR) and an increase in the Measurements to Disclose (MTD), demonstrating strong resilience against cryptographic attacks.
{"title":"RAESC: A Reconfigurable AES Countermeasure Architecture for RISC-V With Enhanced Power Side-Channel Resilience","authors":"Nayana Rajeev;Cathrene Biju;Titu Mary Ignatius;Roy Paily Palathinkal;Rekha K James","doi":"10.1109/LCA.2025.3595003","DOIUrl":"https://doi.org/10.1109/LCA.2025.3595003","url":null,"abstract":"This paper presents RAESC, a reconfigurable Advanced Encryption Standard (AES) countermeasure hardware design that supports AES-128, AES-192, and AES-256 types, enhancing flexibility and resource efficiency in IoT applications. The design incorporates a countermeasure to protect against Power-based Side Channel Attacks (PSCA) by randomizing the AES type based on input plaintext, ensuring improved security. The RAESC is integrated with an RV32IM RISC-V processor, offering streamlined operation and enhanced system security. Performance analysis shows that RAESC’s adaptive encryption strength achieves a balanced trade-off in area, power, and throughput, making it ideal for resource-constrained, security-sensitive IoT applications. Power traces for CPA attacks are generated on Application Specific Integrated Circuit (ASIC) and the design achieves a notable reduction in the Signal to Noise Ratio (SNR) and an increase in the Measurements to Disclose (MTD), demonstrating strong resilience against cryptographic attacks.","PeriodicalId":51248,"journal":{"name":"IEEE Computer Architecture Letters","volume":"24 2","pages":"273-276"},"PeriodicalIF":1.4,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144896825","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-07-31 DOI: 10.1109/LCA.2025.3594110
Mengting Zhang;Zhichuan Guo;Shining Sun
Remote Direct Memory Access (RDMA) enables low-latency datacenter networks but suffers from inefficient loss recovery using Go-Back-N (GBN). GBN retransmits entire packet windows, degrading Flow Completion Time (FCT) under congestion. We introduce RoSR, a novel selective retransmission architecture for Field-Programmable Gate Array (FPGA)-based RDMA NICs that supports hardware-accelerated direct writes of out-of-order (OoO) packets. RoSR supports efficient OoO packet reception and enables fine-grained retransmission using a dynamic shared bitmap for packet tracking. By extending the RDMA over Converged Ethernet version 2 (RoCEv2) packet format, RoSR facilitates selective retransmission. It triggers retransmissions via timeouts using bitmap blocks and introduces new Nack-bitmap and rd-req-bitmap messages for loss reporting. Under 1% packet loss, RoSR achieves up to 13.5× (RDMA Write) and 15.6× (RDMA Read) higher throughput than Xilinx ERNIC. In NS-3 simulations using the HPCC RDMA stack, RoSR reduces FCT slowdown by 3× to 6× compared to GBN across various packet loss rates, congestion control algorithms (DCQCN, HPCC, Timely), and traffic patterns, while maintaining robustness under high round-trip time (RTT) conditions.
{"title":"RoSR: A Novel Selective Retransmission FPGA Architecture for RDMA NICs","authors":"Mengting Zhang;Zhichuan Guo;Shining Sun","doi":"10.1109/LCA.2025.3594110","DOIUrl":"https://doi.org/10.1109/LCA.2025.3594110","url":null,"abstract":"Remote Direct Memory Access (RDMA) enables low-latency datacenter networks but suffers from inefficient loss recovery using Go-Back-N (GBN). GBN retransmits entire packet windows, degrading Flow Completion Time (FCT) under congestion. We introduce RoSR, a novel selective retransmission architecture for Field-Programmable Gate Array (FPGA)-based RDMA NICs that supports hardware-accelerated direct writes of out-of-order (OoO) packets. RoSR supports efficient OoO packet reception and enables fine-grained retransmission using a dynamic shared bitmap for packet tracking. By extending the RDMA over Converged Ethernet version 2 (RoCEv2) packet format, RoSR facilitates selective retransmission. It triggers retransmissions via timeouts using bitmap blocks and introduces new Nack-bitmap and rd-req-bitmap messages for loss reporting. Under 1% packet loss, RoSR achieves up to 13.5× (RDMA Write) and 15.6× (RDMA Read) higher throughput than Xilinx ERNIC. In NS-3 simulations using the HPCC RDMA stack, RoSR reduces FCT slowdown by 3× to 6× compared to GBN across various packet loss rates, congestion control algorithms (DCQCN, HPCC, Timely), and traffic patterns, while maintaining robustness under high round-trip time (RTT) conditions.","PeriodicalId":51248,"journal":{"name":"IEEE Computer Architecture Letters","volume":"24 2","pages":"269-272"},"PeriodicalIF":1.4,"publicationDate":"2025-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144896824","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-07-24 DOI: 10.1109/LCA.2025.3592563
Kwanhee Kyung;Sungmin Yun;Jung Ho Ahn
Large Language Models (LLMs) applying Mixture-of-Experts (MoE) scale to trillions of parameters but require vast memory, motivating a line of research to offload expert weights from fast-but-small DRAM (HBM) to denser Flash SSDs. While SSDs provide cost-effective capacity, their read energy per bit is substantially higher than that of DRAM. This paper quantitatively analyzes the energy implications of offloading MoE expert weights to SSDs during the critical decode stage of LLM inference. Our analysis, comparing SSD, CPU memory (DDR), and HBM storage scenarios for models like DeepSeek-R1, reveals that offloading MoE weights to current SSDs drastically increases per-token-generation energy consumption (e.g., by up to ~12× compared to the HBM baseline), dominating the total inference energy budget. Although techniques like prefetching effectively hide access latency, they cannot mitigate this fundamental energy penalty. We further explore future technological scaling, finding that the inherent sparsity of MoE models could potentially make SSDs energy-viable if Flash read energy improves significantly, roughly by an order of magnitude.
{"title":"SSD Offloading for LLM Mixture-of-Experts Weights Considered Harmful in Energy Efficiency","authors":"Kwanhee Kyung;Sungmin Yun;Jung Ho Ahn","doi":"10.1109/LCA.2025.3592563","DOIUrl":"https://doi.org/10.1109/LCA.2025.3592563","url":null,"abstract":"Large Language Models (LLMs) applying Mixture-of-Experts (MoE) scale to trillions of parameters but require vast memory, motivating a line of research to offload expert weights from fast-but-small DRAM (HBM) to denser Flash SSDs. While SSDs provide cost-effective capacity, their read energy per bit is substantially higher than that of DRAM. This paper quantitatively analyzes the energy implications of offloading MoE expert weights to SSDs during the critical decode stage of LLM inference. Our analysis, comparing SSD, CPU memory (DDR), and HBM storage scenarios for models like DeepSeek-R1, reveals that offloading MoE weights to current SSDs drastically increases per-token-generation energy consumption (e.g., by up to <inline-formula><tex-math>$sim 12times$</tex-math></inline-formula> compared to the HBM baseline), dominating the total inference energy budget. Although techniques like prefetching effectively hide access latency, they cannot mitigate this fundamental energy penalty. We further explore future technological scaling, finding that the inherent sparsity of MoE models could potentially make SSDs energy-viable <i>if</i> Flash read energy improves significantly, roughly by an order of magnitude.","PeriodicalId":51248,"journal":{"name":"IEEE Computer Architecture Letters","volume":"24 2","pages":"265-268"},"PeriodicalIF":1.4,"publicationDate":"2025-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144880476","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-07-23 DOI: 10.1109/LCA.2025.3542809
Bhargav Reddy Godala;Sankara Prasad Ramesh;Krishnam Tibrewala;Chrysanthos Pepi;Gino Chacon;Svilen Kanev;Gilles A. Pokam;Alberto Ros;Daniel A. Jiménez;Paul V. Gratz;David I. August
Modern OOO CPUs have very deep pipelines with large branch misprediction recovery penalties. Speculatively executed instructions on the wrong path can significantly change cache state, depending on speculation levels. Architects often employ trace-driven simulation models in the design exploration stage, which sacrifice precision for speed. Trace-driven simulators are orders of magnitude faster than execution-driven models, reducing the often hundreds of thousands of simulation hours needed to explore new micro-architectural ideas. Despite the strong benefits of trace-driven simulation, it often fails to adequately model the consequences of wrong-path execution because obtaining such traces from real systems is nontrivial. Prior works exclusively consider either pollution or prefetching in the instruction stream/L1-I cache and often ignore the impact on the data stream. Here, we examine wrong-path execution in simulation results and design infrastructure for enabling wrong-path execution in a trace-driven simulator. Our analysis shows the wrong path extensively affects structures on both the instruction and data sides, resulting in performance variations ranging from -3.05% to 20.9% versus ignoring the wrong path. To benefit the research community and enhance the accuracy of simulators, we have opened our traces and tracing utility in the hope that industry can provide wrong-path traces generated by their internal simulators, enabling academic simulation without exposing industry IP.
{"title":"Correct Wrong Path","authors":"Bhargav Reddy Godala;Sankara Prasad Ramesh;Krishnam Tibrewala;Chrysanthos Pepi;Gino Chacon;Svilen Kanev;Gilles A. Pokam;Alberto Ros;Daniel A. Jiménez;Paul V. Gratz;David I. August","doi":"10.1109/LCA.2025.3542809","DOIUrl":"https://doi.org/10.1109/LCA.2025.3542809","url":null,"abstract":"Modern OOO CPUs have very deep pipelines with large branch misprediction recovery penalties. Speculatively executed instructions on the wrong path can significantly change cache state, depending on speculation levels. Architects often employ trace-driven simulation models in the design exploration stage, which sacrifice precision for speed. Trace-driven simulators are orders of magnitude faster than execution-driven models, reducing the often hundreds of thousands of simulation hours needed to explore new micro-architectural ideas. Despite the strong benefits of trace-driven simulation, it often fails to adequately model the consequences of wrong-path execution because obtaining such traces from real systems is nontrivial. Prior works exclusively consider either pollution or prefetching in the instruction stream/L1-I cache and often ignore the impact on the data stream. Here, we examine wrong path execution in simulation results and design a set of infrastructure for enabling wrong-path execution in a trace driven simulator. Our analysis shows the wrong path affects structures on both the instruction and data sides extensively, resulting in performance variations ranging from <inline-formula><tex-math>$-3.05$</tex-math></inline-formula>% to 20.9% versus ignoring wrong path. To benefit the research community and enhance the accuracy of simulators, we opened our traces and tracing utility in the hopes that industry can provide wrong-path traces generated by their internal simulators, enabling academic simulation without exposing industry IP.","PeriodicalId":51248,"journal":{"name":"IEEE Computer Architecture Letters","volume":"24 2","pages":"221-224"},"PeriodicalIF":1.4,"publicationDate":"2025-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144687792","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-07-10 DOI: 10.1109/LCA.2025.3587293
Jumin Kim;Seungmin Baek;Minbok Wi;Hwayong Nam;Michael Jaemin Kim;Sukhan Lee;Kyomin Sohn;Jung Ho Ahn
Per-Row Activation Counting (PRAC), a DRAM read disturbance mitigation method, modifies key DRAM timing parameters, reportedly causing significant performance overheads in simulator-based studies. However, given known discrepancies between simulators and real hardware, real-machine experiments are vital for accurate PRAC performance estimation. We present the first real-machine performance analysis of PRAC. After verifying timing modifications on the latest CPUs using microbenchmarks, our analysis shows that PRAC’s average and maximum overheads are just 1.06% and 3.28% for the SPEC CPU2017 workloads, up to 9.15× lower than simulator-based reports. Further, we show that the close-page policy minimizes this overhead by effectively hiding the DRAM row precharge operations elongated by PRAC from the critical path.
{"title":"Per-Row Activation Counting on Real Hardware: Demystifying Performance Overheads","authors":"Jumin Kim;Seungmin Baek;Minbok Wi;Hwayong Nam;Michael Jaemin Kim;Sukhan Lee;Kyomin Sohn;Jung Ho Ahn","doi":"10.1109/LCA.2025.3587293","DOIUrl":"https://doi.org/10.1109/LCA.2025.3587293","url":null,"abstract":"Per-Row Activation Counting (PRAC), a DRAM read disturbance mitigation method, modifies key DRAM timing parameters, reportedly causing significant performance overheads in simulator-based studies. However, given known discrepancies between simulators and real hardware, real-machine experiments are vital for accurate PRAC performance estimation. We present the first real-machine performance analysis of PRAC. After verifying timing modifications on the latest CPUs using microbenchmarks, our analysis shows that PRAC’s average and maximum overheads are just 1.06% and 3.28% for the SPEC CPU2017 workloads—up to 9.15× lower than simulator-based reports. Further, we show that the close page policy minimizes this overhead by effectively hiding the elongated DRAM row precharge operations due to PRAC from the critical path.","PeriodicalId":51248,"journal":{"name":"IEEE Computer Architecture Letters","volume":"24 2","pages":"217-220"},"PeriodicalIF":1.4,"publicationDate":"2025-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144687689","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-07-10 DOI: 10.1109/LCA.2025.3587582
Ruihao Li;Lizy K. John;Neeraja J. Yadwadkar
Memory allocators, though constituting a small portion of the entire program code, can significantly impact application performance by affecting global factors such as cache behaviors. Moreover, memory allocators are often regarded as a “datacenter tax” inherent to all programs. Even a 1% improvement in performance can lead to significant cost and energy savings when scaled across an entire datacenter fleet. Modern memory allocators are designed to optimize allocation speed and memory fragmentation in multi-threaded environments, relying on complex metadata and control logic to achieve high performance. However, the overhead introduced by this complexity prompts a reevaluation of allocator design. Notably, such overhead can be avoided in single-threaded scenarios, which continue to be widely used across diverse application domains. In this paper, we present ExGen-Malloc, a memory allocator specifically optimized for single-threaded applications. We prototyped ExGen-Malloc on a real system and demonstrated that it achieves a geometric mean speedup of 1.19× over dlmalloc and 1.03× over mimalloc, a modern multi-threaded allocator developed by Microsoft, on the SPEC CPU2017 benchmark suite.
{"title":"Old is Gold: Optimizing Single-Threaded Applications With ExGen-Malloc","authors":"Ruihao Li;Lizy K. John;Neeraja J. Yadwadkar","doi":"10.1109/LCA.2025.3587582","DOIUrl":"https://doi.org/10.1109/LCA.2025.3587582","url":null,"abstract":"Memory allocators, though constituting a small portion of the entire program code, can significantly impact application performance by affecting global factors such as cache behaviors. Moreover, memory allocators are often regarded as a “datacenter tax” inherent to all programs. Even a 1% improvement in performance can lead to significant cost and energy savings when scaled across an entire datacenter fleet. Modern memory allocators are designed to optimize allocation speed and memory fragmentation in multi-threaded environments, relying on complex metadata and control logic to achieve high performance. However, the overhead introduced by this complexity prompts a reevaluation of allocator design. Notably, such overhead can be avoided in single-threaded scenarios, which continue to be widely used across diverse application domains. In this paper, we present <i>ExGen-Malloc</i>, a memory allocator specifically optimized for single-threaded applications. We prototyped <i>ExGen-Malloc</i> on a real system and demonstrated that it achieves a geometric mean speedup of <inline-formula><tex-math>$1.19 times$</tex-math></inline-formula> over dlmalloc and <inline-formula><tex-math>$1.03 times$</tex-math></inline-formula> over mimalloc, a modern multi-threaded allocator developed by Microsoft, on the SPEC CPU2017 benchmark suite.","PeriodicalId":51248,"journal":{"name":"IEEE Computer Architecture Letters","volume":"24 2","pages":"225-228"},"PeriodicalIF":1.4,"publicationDate":"2025-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144687664","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-07-07 DOI: 10.1109/LCA.2025.3586312
Xueyang Liu;Seonjin Na;Euijun Chung;Jiashen Cao;Jing Yang;Hyesoon Kim
The growing dataset sizes of LLMs have made low-cost SSDs a popular solution for extending GPU memory in mobile devices. In this paper, we introduce CA-Scheduler, a contention-aware scheduling scheme for GPU-initiated SSD access. The key insight behind CA-Scheduler is to leverage the BSP GPU programming model, which allows work to be reordered at the thread block level to optimize SSD throughput. By capitalizing on the predictable memory access patterns of GPU thread blocks, CA-Scheduler anticipates SSD locations to minimize contention and improve performance.
{"title":"Contention-Aware GPU Thread Block Scheduler for Efficient GPU-SSD","authors":"Xueyang Liu;Seonjin Na;Euijun Chung;Jiashen Cao;Jing Yang;Hyesoon Kim","doi":"10.1109/LCA.2025.3586312","DOIUrl":"https://doi.org/10.1109/LCA.2025.3586312","url":null,"abstract":"The growing dataset sizes in LLM have made low-cost SSDs a popular solution for extending GPU memory in mobile devices. In this paper, we introduce <monospace>CA-Scheduler</monospace>, a contention-aware scheduling scheme for GPU-initiated SSD access. The key insight behind <monospace>CA-Scheduler</monospace> is leveraging the BSP GPU programming model, which allows reordering work at the thread block level to optimize SSD throughput. By capitalizing on the predictable memory access patterns of GPU thread blocks, <monospace>CA-Scheduler</monospace> anticipates SSD locations to minimize contention and improve performance.","PeriodicalId":51248,"journal":{"name":"IEEE Computer Architecture Letters","volume":"24 2","pages":"257-260"},"PeriodicalIF":1.4,"publicationDate":"2025-07-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144896798","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-06-27 DOI: 10.1109/LCA.2025.3583758
Kwangrae Kim;Ki-Seok Chung
Sparse matrix-matrix multiplication (SpGEMM) is widely used in various scientific computing applications. However, the performance of SpGEMM is typically bound by memory performance due to irregular access patterns. Prior accelerators leveraging high-bandwidth memory (HBM) with optimized data flows still face limitations in handling sparse matrices with varying sizes and sparsity levels. We propose HPN-SpGEMM, a hybrid architecture that employs both processing-in-memory (PIM) cores inside bank groups and near-memory-processing (NMP) cores in the logic die of an HBM memory. To the best of our knowledge, this is the first hybrid architecture for SpGEMM that leverages both PIM cores and NMP cores. Evaluation results demonstrate significant performance gains, effectively overcoming memory-bound constraints.
{"title":"HPN-SpGEMM: Hybrid PIM-NMP for SpGEMM","authors":"Kwangrae Kim;Ki-Seok Chung","doi":"10.1109/LCA.2025.3583758","DOIUrl":"https://doi.org/10.1109/LCA.2025.3583758","url":null,"abstract":"Sparse matrix-matrix multiplication (SpGEMM) is widely used in various scientific computing applications. However, the performance of SpGEMM is typically bound by memory performance due to irregular access patterns. Prior accelerators leveraging high-bandwidth memory (HBM) with optimized data flows still face limitations in handling sparse matrices with varying sizes and sparsity levels. We propose HPN-SpGEMM, a hybrid architecture that employs both processing-in-memory (PIM) cores inside bank groups and near-memory-processing (NMP) cores in the logic die of an HBM memory. To the best of our knowledge, this is the first hybrid architecture for SpGEMM that leverages both PIM cores and NMP cores. Evaluation results demonstrate significant performance gains, effectively overcoming memory-bound constraints.","PeriodicalId":51248,"journal":{"name":"IEEE Computer Architecture Letters","volume":"24 2","pages":"209-212"},"PeriodicalIF":1.4,"publicationDate":"2025-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144680825","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-06-24 DOI: 10.1109/LCA.2025.3553143
Hyunkyun Shin;Seongtae Bang;Hyungwon Park;Daehoon Kim
As the demand for GPU memory from applications such as machine learning continues to grow exponentially, maximizing GPU memory capacity has become increasingly important. Unified Virtual Memory (UVM), which combines host and GPU memory into a unified address space, allows GPUs to utilize more memory than their physical capacity. However, this advantage comes at the cost of significant overheads when accessing host memory. Although existing prefetching techniques help alleviate these overheads, they still encounter challenges when dealing with irregular workloads and dynamic mixed workloads. In this paper, we demonstrate that the regularity of workloads is strongly correlated with the sharing status of UVM memory blocks among the Streaming Multiprocessors (SMs) of GPUs, which in turn impacts the effectiveness of prefetching. In addition, we propose the Sharing Aware preFEtching technique, SAFE, which dynamically adjusts prefetching strategies based on the sharing status of the accessed memory blocks. SAFE efficiently tracks the sharing status of the memory blocks by leveraging unified TLBs (uTLBs) and enforces tailored prefetching configurations for each block. This approach requires no hardware modifications and incurs negligible performance overhead. Our evaluation shows that SAFE achieves up to a 6.5× performance improvement over the default UVM prefetcher for workloads with predominantly irregular memory access patterns, with an average improvement of 3.6×.
{"title":"SAFE: Sharing-Aware Prefetching for Efficient GPU Memory Management With Unified Virtual Memory","authors":"Hyunkyun Shin;Seongtae Bang;Hyungwon Park;Daehoon Kim","doi":"10.1109/LCA.2025.3553143","DOIUrl":"https://doi.org/10.1109/LCA.2025.3553143","url":null,"abstract":"As the demand for GPU memory from applications such as machine learning continues to grow exponentially, maximizing GPU memory capacity has become increasingly important. Unified Virtual Memory (UVM), which combines host and GPU memory into a unified address space, allows GPUs to utilize more memory than their physical capacity. However, this advantage comes at the cost of significant overheads when accessing host memory. Although existing prefetching techniques help alleviate these overheads, they still encounter challenges when dealing with irregular workloads and dynamic mixed workloads. In this paper, we demonstrate that the regularity of workloads is strongly correlated with the sharing status of UVM memory blocks among the Streaming Multiprocessors (SMs) of GPUs, which in turn impacts the effectiveness of prefetching. In addition, we propose the <bold>S</b>haring <bold>A</b>ware pre<bold>FE</b>tching technique, <monospace>SAFE</monospace>, which dynamically adjusts prefetching strategies based on the sharing status of the accessed memory blocks. <monospace>SAFE</monospace> efficiently tracks the sharing status of the memory blocks by leveraging unified TLBs (uTLBs) and enforces tailored prefetching configurations for each block. This approach requires no hardware modifications and incurs negligible performance overhead. Our evaluation shows that <monospace>SAFE</monospace> achieves up to a 6.5× performance improvement over UVM default prefetcher for workloads with predominantly irregular memory access patterns, with an average improvement of 3.6×.","PeriodicalId":51248,"journal":{"name":"IEEE Computer Architecture Letters","volume":"24 1","pages":"117-120"},"PeriodicalIF":1.4,"publicationDate":"2025-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144472587","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-06-23 DOI: 10.1109/LCA.2025.3582481
Jiaqi Lou;Yu Li;Srikar Vanavasam;Nam Sung Kim
Recent performance advancements in inter-host networking demand innovations in intra-host communication and SmartNIC-accelerated in-network processing. However, developing novel SmartNIC features remains difficult due to the absence of hardware observability and of low-cost, deterministic testing environments in existing software-based or commercial development platforms. While FPGA-based SmartNICs offer high flexibility and performance for packet-processing acceleration, existing solutions support only a limited subset of the network technologies widely used in commercial datacenters. To address these challenges, we introduce HINT, an FPGA-based development and emulation platform that transparently mimics a commercial SmartNIC in the system, featuring controlled network traffic generation with a high-performance traffic engine and kernel-bypass network technologies. It also supports configurable workload patterns, nanosecond-level latency measurement, and a reconfigurable Receive Side Scaling (RSS) engine for load balancing. Our evaluation shows that HINT achieves 91% of PCIe’s theoretical efficiency, providing a highly effective and scalable platform to emulate an end-to-end system with support for diverse network stacks. HINT thus establishes an accessible, high-fidelity platform for SmartNIC development and emulation, along with architectural exploration of intra-host communication.
{"title":"HINT: A Hardware Platform for Intra-Host NIC Traffic and SmartNIC Emulation","authors":"Jiaqi Lou;Yu Li;Srikar Vanavasam;Nam Sung Kim","doi":"10.1109/LCA.2025.3582481","DOIUrl":"https://doi.org/10.1109/LCA.2025.3582481","url":null,"abstract":"Recent performance advancements in inter-host networking demand innovations in intra-host communication and SmartNIC-accelerated in-network processing. However, developing novel SmartNIC features remains difficult due to absence of hardware observability and low-cost, deterministic testing environments with existing software-based or commercial development platforms. While FPGA-based SmartNICs offer high flexibility and performance for packet processing acceleration, existing solutions support only a limited subset of network technologies widely used in commercial datacenters. To address these challenges, we introduce HINT, an FPGA-based development and emulation platform that transparently mimics a commercial SmartNIC in the system, featuring controlled network traffic generation with a high-performance traffic engine and kernel-bypass network technologies. It also supports configurable workload patterns, nanosecond-level latency measurement, and a reconfigurable Receive Side Scaling (RSS) engine for load balancing. Our evaluation shows that HINT achieves 91% of PCIe’s theoretical efficiency, providing a highly effective and scalable platform to emulate an end-to-end system with support for diverse network stacks. HINT thus establishes an accessible, high-fidelity platform for SmartNIC development and emulation, along with architectural exploration of intra-host communication.","PeriodicalId":51248,"journal":{"name":"IEEE Computer Architecture Letters","volume":"24 2","pages":"261-264"},"PeriodicalIF":1.4,"publicationDate":"2025-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11048525","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144880525","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}