Stardust: Scalable and Transferable Workload Mapping for Large AI on Multi-Chiplet Systems
Wencheng Zou;Feiyun Zhao;Nan Wu
IEEE Computer Architecture Letters, vol. 24, no. 2, pp. 201-204
Pub Date: 2025-06-17 | DOI: 10.1109/LCA.2025.3580562
Workload partitioning and mapping are critical to optimizing performance in multi-chiplet systems. However, existing approaches struggle with scalability in large search spaces and lack transferability across different workloads. To overcome these limitations, we propose Stardust, a scalable and transferable workload mapping framework for multi-chiplet systems. Stardust combines learnable graph clustering to downscale computation graphs for efficient partitioning, topology-masked attention to capture structural information, and deep reinforcement learning (DRL) to optimize workload mapping. Evaluations on production-scale AI models show that (1) Stardust-generated mappings significantly outperform commonly used heuristics in throughput, and (2) fine-tuning a pre-trained Stardust model improves sample efficiency by up to 15× compared to training from scratch.
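The graph-downscaling step the abstract describes can be pictured with a plain greedy merge standing in for Stardust's learned clustering policy. The greedy rule and the cluster-size cap below are illustrative assumptions, not the paper's actual method.

```python
# Toy stand-in for Stardust's learnable graph clustering: greedily merge
# connected operators of a computation graph into small clusters so the
# downstream mapper works on a much smaller graph.

def greedy_cluster(edges, num_nodes, max_cluster=2):
    """Return a cluster id per node after merging along edges."""
    cluster = list(range(num_nodes))        # each node starts in its own cluster
    size = {i: 1 for i in range(num_nodes)}
    for u, v in edges:
        cu, cv = cluster[u], cluster[v]
        if cu != cv and size[cu] + size[cv] <= max_cluster:
            for i, c in enumerate(cluster):     # merge cluster cv into cu
                if c == cv:
                    cluster[i] = cu
            size[cu] += size.pop(cv)
    return cluster

# A four-operator chain 0-1-2-3 collapses into two clusters of two operators,
# halving the graph the mapping stage has to search over.
assert len(set(greedy_cluster([(0, 1), (1, 2), (2, 3)], 4))) == 2
```

The learned policy in the paper presumably decides *which* merges to make based on graph features; the point here is only that clustering shrinks the mapping search space.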
pNet-gem5: Full-System Simulation With High-Performance Networking Enabled by Parallel Network Packet Processing
Jongmin Shin;Seongtae Bang;Gyeongseo Park;Daehoon Kim
IEEE Computer Architecture Letters, vol. 24, no. 2, pp. 193-196
Pub Date: 2025-06-06 | DOI: 10.1109/LCA.2025.3577232
Modern server processors in data centers equipped with high-performance networking technologies (e.g., 100 Gigabit Ethernet) commonly support parallel packet processing via multi-queue NICs, enabling multiple cores to efficiently handle massive traffic loads. However, existing architectural simulators such as gem5 lack support for these techniques and suffer from limited bandwidth due to outdated networking models. Although a recent study introduced a simulation framework supporting userspace high-performance networking via the Data Plane Development Kit (DPDK), many applications still rely on kernel-based networking. To address these limitations, we present pNet-gem5, a full-system simulation framework designed to model server systems under high-performance network workloads, targeting data center architecture research. pNet-gem5 extends gem5 to support parallel packet processing on multi-core systems by integrating multiple hardware queues and a more advanced interrupt mechanism, Message Signaled Interrupts (MSI), which allows each NIC queue to be mapped to a dedicated core with its own IRQ. It also provides a high-performance network interface and device driver that support scalable and configurable packet distribution between hardware and software. Moreover, by decoupling packet distribution and scheduling from the NIC core logic, pNet-gem5 enables flexible experimentation with custom policies. As a result, pNet-gem5 enables more realistic simulation of modern server environments by modeling multi-queue NICs and supporting bandwidths up to 46 Gbps, a significant improvement over the previous limit of only a few Gbps and much closer to today's tens-of-Gbps networks.
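The core idea behind multi-queue NICs, queue-per-core steering, can be sketched with a toy hash-based flow distributor in the spirit of receive-side scaling. The hash function and flow fields below are illustrative assumptions, not pNet-gem5's actual steering logic.

```python
# Toy sketch of multi-queue packet steering: hash a flow's 4-tuple to pick a
# queue, so every packet of one flow lands on the same core (preserving
# per-flow ordering) while distinct flows spread across cores.
import zlib

NUM_QUEUES = 4  # one hardware queue per core, each with its own MSI IRQ

def steer(src_ip: str, dst_ip: str, src_port: int, dst_port: int) -> int:
    """Map a flow 4-tuple to a queue index in [0, NUM_QUEUES)."""
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    return zlib.crc32(key) % NUM_QUEUES

# Packets of the same flow always hit the same queue (per-flow core affinity).
assert steer("10.0.0.1", "10.0.0.2", 5000, 80) == \
       steer("10.0.0.1", "10.0.0.2", 5000, 80)
```

Because pNet-gem5 decouples this distribution logic from the NIC core model, a custom policy amounts to swapping out a function like `steer`.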
The Architectural Sustainability Indicator
Jaime Roelandts;Ajeya Naithani;Lieven Eeckhout
IEEE Computer Architecture Letters, vol. 24, no. 2, pp. 205-208
Pub Date: 2025-06-05 | DOI: 10.1109/LCA.2025.3576891
Computing devices are responsible for a significant fraction of the world's total carbon footprint. Designing sustainable systems is a challenging endeavor because of the huge design space, the complex objective function, and the inherent data uncertainty. To make matters worse, a design that seems sustainable at first may turn out not to be once rebound effects are taken into account. In this paper, we propose the Architectural Sustainability Indicator (ASI), a novel metric to assess the sustainability of a given design and determine whether it is strongly sustainable, weakly sustainable, or unsustainable. ASI provides insight and hints for turning unsustainable and weakly sustainable design points into strongly sustainable ones that are robust against potential rebound effects. A case study illustrates how ASI steers Scalar Vector Runahead, a weakly sustainable hardware prefetching technique, into a strongly sustainable one while offering a 3.2× performance boost.
WoperTM: Got Nacks? Use Them!
Víctor Nicolás-Conesa;Rubén Titos-Gil;Ricardo Fernández-Pascual;Manuel E. Acacio;Alberto Ros
IEEE Computer Architecture Letters, vol. 24, no. 1, pp. 157-160
Pub Date: 2025-04-28 | DOI: 10.1109/LCA.2025.3565199
The simplicity of requester-wins has made it the preferred choice for conflict resolution in commercial implementations of Hardware Transactional Memory (HTM), which typically have relied on conventional locking to escape from conflict-induced livelocks. Prior work advocates for combining requester-wins and requester-loses to ensure progress for higher-priority transactions, yet it fails to take full advantage of the available features, namely, protocol support for nacks. This paper introduces WoperTM, a dual-policy, best-effort HTM design that resolves conflicts using a requester-loses policy in the common case. Our key insight is that, since nacks are required to support priorities in HTM, performance can be improved at nearly no extra cost by allowing regular transactions to benefit from requester-loses, instead of only those involving a high-priority transaction. Experimental results using gem5 and STAMP show that WoperTM can significantly reduce squashed work and improve execution times by 12% with respect to power transactions, with negligible hardware overhead.
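The two conflict-resolution policies the letter contrasts can be sketched as a small decision function. The transaction records and the priority rule below are illustrative assumptions; the real WoperTM design operates in the coherence protocol, not in software.

```python
# Minimal sketch of HTM conflict resolution: under requester-wins the owner
# of the conflicting line is squashed; under requester-loses the owner nacks
# the request and keeps running, unless the requester has higher priority.

def resolve(requester, owner, policy="requester-loses"):
    """Return the transaction that proceeds when `requester` touches a cache
    line currently owned by the in-flight transaction `owner`."""
    if policy == "requester-wins":
        return requester            # owner squashed (common commercial HTM)
    if requester["prio"] > owner["prio"]:
        return requester            # priority support still guaranteed
    return owner                    # requester stalls/retries on the nack

t_low  = {"id": "T1", "prio": 0}
t_high = {"id": "T2", "prio": 1}
assert resolve(t_low, t_high) is t_high   # regular requester is nacked
assert resolve(t_high, t_low) is t_high   # high-priority requester proceeds
```

The letter's point maps onto this sketch directly: once the nack path exists for priorities, letting ordinary transactions take the `requester-loses` branch costs almost nothing extra and avoids squashing the owner's completed work.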
Cache and Near-Data Co-Design for Chiplets
Arteen Abrishami;Zhengrong Wang;Tony Nowatzki
IEEE Computer Architecture Letters, vol. 24, no. 1, pp. 149-152
Pub Date: 2025-04-25 | DOI: 10.1109/LCA.2025.3564535
Vendors are increasingly adopting chiplet-based designs to manage cost for large-scale multi-cores. While near-data computing, a paradigm that offloads computation to where data resides in memory, has been studied in the context of monolithic chip designs, its application to chiplets remains unexplored. In this letter, we explore how the paradigm extends to chiplets in a system where computation is offloaded to accelerators collocated within the last-level-cache structure. We explore both shared and private last-level-cache designs across a variety of workloads, including both large-scale graph computations and more regular-access workloads, in order to understand how to optimize the cache and topology design for near-data workloads. We find that with a mesh chiplet architecture with a shared last-level cache (LLC), near-data optimization can achieve an 8.70× speedup on graph workloads, providing an even greater benefit than in traditional systems.
In-Memory Computing Accelerator for Iterative Linear Algebra Solvers
Rui Liu;Zerun Li;Xiaoyu Zhang;Xiaoming Chen;Yinhe Han;Minghua Tang
IEEE Computer Architecture Letters, vol. 24, no. 1, pp. 161-164
Pub Date: 2025-04-22 | DOI: 10.1109/LCA.2025.3563365
Iterative linear solvers are a crucial kernel in many numerical analysis problems. The performance and energy efficiency of iterative solvers on traditional architectures are severely constrained by the memory wall bottleneck. Computing-in-memory (CIM) has the potential to enhance solving efficiency. Existing CIM architectures are mostly customized for specific algorithms and primarily focus on handling fixed-point operations, making it difficult for them to meet the demands of diverse, high-precision applications. In this work, we propose a CIM architecture that natively supports various iterative linear solvers based on floating-point operations. We develop a new instruction set for the accelerator, whose instructions can be flexibly combined to implement various iterative solvers. The evaluation results show that, compared with the GPU implementation, our accelerator achieves more than 10.1× speedup and 6.8× energy savings when executing different iterative solvers.
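For concreteness, this is the shape of kernel such an accelerator targets: a Jacobi iteration for Ax = b, written here as a plain Python reference sketch. The accelerator would express the same multiply-accumulate and division steps through its floating-point CIM instruction set; this code is only a functional model of the algorithm, not of the hardware.

```python
# Jacobi iterative solver for Ax = b: each sweep computes
# x_i <- (b_i - sum_{j != i} A_ij * x_j) / A_ii.
# Convergence is guaranteed when A is strictly diagonally dominant.

def jacobi(A, b, iters=50):
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        x_new = [0.0] * n
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x_new[i] = (b[i] - s) / A[i][i]
        x = x_new
    return x

# Diagonally dominant 2x2 system with exact solution x = [1, 1]:
#   4x + y = 5
#   2x + 5y = 7
x = jacobi([[4.0, 1.0], [2.0, 5.0]], [5.0, 7.0])
assert all(abs(xi - 1.0) < 1e-6 for xi in x)
```

Other solvers in the same family (Gauss-Seidel, conjugate gradient) share these primitive operations, which is why a combinable instruction set can cover all of them.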
Exploring Volatile FPGAs Potential for Accelerating Energy-Harvesting IoT Applications
Aalaa M.A. Babai;Koji Inoue
IEEE Computer Architecture Letters, vol. 24, no. 1, pp. 137-140
Pub Date: 2025-04-21 | DOI: 10.1109/LCA.2025.3563105
Low-power volatile FPGAs (VFPGAs) naturally meet the intertwined processing and flexibility demands of IoT devices. However, as IoT devices shift toward Energy Harvesting (EH) for self-sustained operation, VFPGAs are overlooked because they struggle under harvested power: their volatile SRAM configuration memory cells frequently lose their data, causing high reconfiguration penalties. These penalties grow with the FPGA's resource usage, limiting usable resources under EH. Still, advances in low-power FPGAs and in the efficiency of energy-buffering systems motivate us to explore EH-powered FPGAs. Thus, we analyze the interplay of their resources, performance, and reconfiguration; simulate their operation under different EH conditions; and show how they can be utilized up to an application- and EH-dependent threshold.
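The resource-usage threshold the letter identifies can be illustrated with a toy model: every power outage wipes the SRAM configuration, and the reload cost scales with how much of the fabric the design uses. All constants and the linear cost model below are illustrative assumptions, not the paper's simulation.

```python
# Toy model of a VFPGA under intermittent harvested power: an outage erases
# the configuration, and reloading it costs cycles proportional to fabric
# usage, so larger designs do less useful work per charge-discharge cycle.

def useful_cycles(power_trace, usage, reconfig_per_usage=10):
    """Cycles of real work given a 0/1 power trace and fabric usage in [0,1]."""
    reconfig_cost = int(usage * reconfig_per_usage)
    work, pending = 0, reconfig_cost      # must configure before first use
    for powered in power_trace:
        if not powered:
            pending = reconfig_cost       # outage: configuration lost
        elif pending > 0:
            pending -= 1                  # still reloading the bitstream
        else:
            work += 1                     # configured and powered: compute
    return work

trace = [1] * 8 + [0] + [1] * 8           # one outage mid-run
# A small design recovers quickly; a large one spends its budget reloading.
assert useful_cycles(trace, usage=0.2) > useful_cycles(trace, usage=0.9)
```

Even this crude model reproduces the qualitative conclusion: beyond some usage level, which depends on the outage pattern (i.e., on the application and the EH source), extra resources stop paying for themselves.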
Exploring the DIMM PIM Architecture for Accelerating Time Series Analysis
Shunchen Shi;Fan Yang;Zhichun Li;Xueqi Li;Ninghui Sun
IEEE Computer Architecture Letters, vol. 24, no. 1, pp. 169-172
Pub Date: 2025-04-18 | DOI: 10.1109/LCA.2025.3562431
Time series analysis (TSA) is an important technique for extracting information from domain data. TSA is memory-bound on conventional platforms due to excessive off-chip data movement between processing units and main memory. Processing in memory (PIM) is a paradigm that alleviates the memory-access bottleneck for data-intensive applications by enabling computation to be performed directly within memory. In this paper, we first perform profiling to characterize TSA on conventional CPUs. Then, we implement TSA on UPMEM, a commercial DRAM Dual-Inline Memory Module (DIMM) PIM platform, and identify computation as the primary bottleneck on PIM. Finally, we evaluate the impact of enhancing the computational capability of current DIMM PIM hardware on accelerating TSA. Overall, our work provides insights for designing optimized DIMM PIM architectures for high-performance, efficient time series analysis.
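As a concrete example of a TSA building block that stresses both memory and compute, consider sliding-window mean/standard-deviation, the prerequisite of z-normalized distance computations such as the matrix profile. Choosing this particular primitive is our illustrative assumption; the letter does not specify its kernels here.

```python
# Sliding-window mean and std over a time series in O(n) via running sums.
# Each PIM bank could compute this over its local slice of the series,
# avoiding the off-chip movement that makes TSA memory-bound on CPUs.
import math

def sliding_mean_std(ts, m):
    """(mean, std) for every length-m window of ts."""
    out = []
    s = sum(ts[:m])                       # running sum of the window
    sq = sum(v * v for v in ts[:m])       # running sum of squares
    for i in range(len(ts) - m + 1):
        mu = s / m
        var = max(sq / m - mu * mu, 0.0)  # clamp tiny negative round-off
        out.append((mu, math.sqrt(var)))
        if i + m < len(ts):               # slide the window one step right
            s += ts[i + m] - ts[i]
            sq += ts[i + m] ** 2 - ts[i] ** 2
    return out

stats = sliding_mean_std([1.0, 2.0, 3.0, 4.0], 2)
assert stats[0] == (1.5, 0.5)             # window [1, 2]
```

The per-window arithmetic (divisions, square roots) is exactly the kind of computation the letter finds to be the bottleneck on current UPMEM DPUs, which lack hardware floating-point units.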
Segin: Synergistically Enabling Fine-Grained Multi-Tenant and Resource Optimized SpMV
Helya Hosseini;Ubaid Bakhtiar;Donghyeon Joo;Bahar Asgari
IEEE Computer Architecture Letters, vol. 24, no. 1, pp. 181-184
Pub Date: 2025-04-17 | DOI: 10.1109/LCA.2025.3562120
Sparse matrix-vector multiplication (SpMV) is a critical operation across numerous application domains. As a memory-bound kernel, SpMV does not require a complex compute engine, but it still needs efficient use of the available compute units to reach peak performance. However, sparsity causes resource underutilization. To run SpMV efficiently, we propose Segin, which leverages a novel fine-grained multi-tenancy that allows multiple SpMV operations to execute simultaneously on a single hardware unit with minimal modifications, in turn improving throughput. To achieve this, Segin employs hierarchical bitmaps, realized with a lightweight logic circuit, to quickly and efficiently identify optimal pairs of sparse matrices to overlap. Our evaluations demonstrate that Segin can improve throughput by 1.92× while enhancing resource utilization.
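The bitmap-pairing idea can be sketched in a few lines: summarize each matrix's nonzero pattern as a coarse bitmap, then pair matrices whose bitmaps overlap least so their work interleaves on one engine. The block granularity and the pairing rule here are illustrative assumptions, not Segin's actual hierarchy.

```python
# Sketch of bitmap-guided tenant pairing for SpMV: one bit per block of rows,
# set if any row in the block holds nonzeros. Low bitmap overlap suggests the
# two matrices would keep different compute lanes busy at different times.

def coarse_bitmap(rows_with_nonzeros, block=4):
    """One bit per `block` consecutive rows."""
    bits = 0
    for r in rows_with_nonzeros:
        bits |= 1 << (r // block)
    return bits

def overlap(bm_a, bm_b):
    """Number of row blocks where both matrices are busy."""
    return bin(bm_a & bm_b).count("1")

# Matrix A is busy in rows 0-3, matrix B in rows 8-11: disjoint blocks,
# so this pair is a good candidate to co-schedule.
bm_a = coarse_bitmap([0, 1, 3])
bm_b = coarse_bitmap([8, 10, 11])
assert overlap(bm_a, bm_b) == 0
```

A single AND plus a population count per candidate pair is cheap enough for hardware, which matches the paper's claim that a lightweight logic circuit suffices.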
MixDiT: Accelerating Image Diffusion Transformer Inference With Mixed-Precision MX Quantization
Daeun Kim;Jinwoo Hwang;Changhun Oh;Jongse Park
IEEE Computer Architecture Letters, vol. 24, no. 1, pp. 141-144
Pub Date: 2025-04-15 | DOI: 10.1109/LCA.2025.3560786
Diffusion Transformer (DiT) has driven significant progress in image generation tasks. However, DiT inferencing is notoriously compute-intensive and incurs long latency even on datacenter-scale GPUs, primarily due to its iterative nature and heavy reliance on the GEMM operations inherent to its encoder-based structure. To address the challenge, prior work has explored quantization, but achieving low-precision quantization for DiT inferencing with both high accuracy and substantial speedup remains an open problem. To this end, this paper proposes MixDiT, an algorithm-hardware co-designed acceleration solution that exploits mixed Microscaling (MX) formats to quantize DiT activation values. MixDiT quantizes the DiT activation tensors by selectively applying higher precision to magnitude-based outliers, producing mixed-precision GEMM operations. To achieve tangible speedup from the mixed-precision arithmetic, we design a MixDiT accelerator that enables precision-flexible multiplications and efficient MX precision conversions. Our experimental results show that MixDiT delivers a speedup of 2.10–5.32× over RTX 3090 with no loss in FID.
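The magnitude-based outlier split at the heart of MixDiT can be illustrated with a scalar toy: values above a threshold keep full precision, while the rest snap to a low-bit symmetric grid. The quantizer below is a plain uniform one for illustration, not the actual MX (Microscaling) block format, and the threshold rule is our assumption.

```python
# Toy mixed-precision activation quantizer: magnitude-based outliers are kept
# at full precision; inliers are rounded to a symmetric signed low-bit grid
# whose scale is set by the largest inlier.

def mixed_quantize(vals, outlier_thresh, bits=4):
    levels = 2 ** (bits - 1) - 1          # e.g., 7 positive levels for 4 bits
    inliers = [v for v in vals if abs(v) <= outlier_thresh]
    scale = max((abs(v) for v in inliers), default=1.0) / levels
    out = []
    for v in vals:
        if abs(v) > outlier_thresh:
            out.append(v)                 # outlier: keep high precision
        else:
            out.append(round(v / scale) * scale)
    return out

q = mixed_quantize([0.1, -0.3, 0.7, 42.0], outlier_thresh=1.0)
assert q[3] == 42.0                       # the outlier survives exactly
assert all(abs(a - b) <= 0.05 for a, b in zip(q[:3], [0.1, -0.3, 0.7]))
```

Excluding outliers from the scale computation is what keeps the inlier grid fine; with the 42.0 included, the 4-bit step would balloon to 6 and wipe out the small activations, which is the accuracy failure mode low-precision DiT quantization runs into.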