{"title":"A Design Flow for Scheduling Spiking Deep Convolutional Neural Networks on Heterogeneous Neuromorphic System-on-Chip","authors":"Anup Das","doi":"10.1145/3635032","DOIUrl":null,"url":null,"abstract":"<p>Neuromorphic systems-on-chip (NSoCs) integrate CPU cores and neuromorphic hardware accelerators on the same chip. These platforms can execute spiking deep convolutional neural networks (SDCNNs) with a low energy footprint. Modern NSoCs are heterogeneous in terms of their computing, communication, and storage resources. This makes scheduling SDCNN operations a combinatorial problem of exploring an exponentially-large state space in determining mapping, ordering, and timing of operations to achieve a target hardware performance, e.g., throughput. </p><p>We propose a systematic design flow to schedule SDCNNs on an NSoC. Our scheduler, called SMART (<underline>S</underline>DCNN <underline>MA</underline>pping, Orde<underline>R</underline>ing, and <underline>T</underline>iming), branches the combinatorial optimization problem into computationally-relaxed sub-problems that generate fast solutions without significantly compromising the solution quality. SMART improves performance by efficiently incorporating the heterogeneity in computing, communication, and storage resources. SMART operates in four steps. First, it creates a self-timed execution schedule to map operations to compute resources, maximizing throughput. Second, it uses an optimization strategy to distribute activation and synaptic weights to storage resources, minimizing data communication-related overhead. Third, it constructs an inter-processor communication (IPC) graph with a transaction order for its communication actors. This transaction order is created using a transaction partial order algorithm, which minimizes contention on the shared communication resources. Finally, it schedules this IPC graph to hardware by overlapping communication with the computation, and leveraging operation, pipeline, and batch parallelism. </p><p>We evaluate SMART using 10 representative image, object, and language-based SDCNNs. Results show that SMART increases throughput by an average 23%, compared to a state-of-the-art scheduler. SMART is implemented entirely in software as a compiler extension. It doesn’t require any change in a neuromorphic hardware or its interface to CPUs. It improves throughput with only a marginal increase in the compilation time. SMART is released under the open-source MIT licensing at https://github.com/drexel-DISCO/SMART\nto foster future research.</p>","PeriodicalId":50914,"journal":{"name":"ACM Transactions on Embedded Computing Systems","volume":"47 6","pages":""},"PeriodicalIF":2.8000,"publicationDate":"2023-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACM Transactions on Embedded Computing Systems","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1145/3635032","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE","Score":null,"Total":0}
Citations: 0
Abstract
Neuromorphic systems-on-chip (NSoCs) integrate CPU cores and neuromorphic hardware accelerators on the same chip. These platforms can execute spiking deep convolutional neural networks (SDCNNs) with a low energy footprint. Modern NSoCs are heterogeneous in terms of their computing, communication, and storage resources. This makes scheduling SDCNN operations a combinatorial problem: an exponentially large state space must be explored to determine the mapping, ordering, and timing of operations that achieve a target hardware performance, e.g., throughput.
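To make the size of this decision space concrete, the following minimal sketch (not the paper's implementation) counts only the mapping and ordering choices for a small workload; the data structure and function names are illustrative assumptions.

```python
# A toy illustration (not the paper's code) of why joint mapping/ordering/timing
# is combinatorial: even counting only mapping and ordering choices, the space
# explodes. All names here are illustrative assumptions.
import math
from dataclasses import dataclass

@dataclass(frozen=True)
class CandidateSchedule:
    mapping: tuple   # operation i -> compute resource (CPU core or neuromorphic tile)
    ordering: tuple  # total order of operations that share a resource
    timing: tuple    # start time assigned to each operation

def mapping_ordering_count(n_ops: int, n_resources: int) -> int:
    """Lower bound on the schedule space: every assignment of n_ops operations
    to n_resources, times every total order of the operations. Timing choices
    (even when discretized) only enlarge this further."""
    return (n_resources ** n_ops) * math.factorial(n_ops)

# 20 operations on 4 heterogeneous resources already exceed 10^30 candidates,
# which is why an exhaustive search is impractical.
print(f"{mapping_ordering_count(20, 4):.2e}")
```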
We propose a systematic design flow to schedule SDCNNs on an NSoC. Our scheduler, called SMART (SDCNN MApping, OrdeRing, and Timing), branches the combinatorial optimization problem into computationally-relaxed sub-problems that generate fast solutions without significantly compromising the solution quality. SMART improves performance by efficiently incorporating the heterogeneity in computing, communication, and storage resources. SMART operates in four steps. First, it creates a self-timed execution schedule to map operations to compute resources, maximizing throughput. Second, it uses an optimization strategy to distribute activation and synaptic weights to storage resources, minimizing data communication-related overhead. Third, it constructs an inter-processor communication (IPC) graph with a transaction order for its communication actors. This transaction order is created using a transaction partial order algorithm, which minimizes contention on the shared communication resources. Finally, it schedules this IPC graph to hardware by overlapping communication with the computation, and leveraging operation, pipeline, and batch parallelism.
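The following self-contained toy sketch shows the shape of this four-step flow on a tiny example; the greedy heuristics, cost numbers, and names are deliberate simplifications and assumptions for illustration, not the algorithms in SMART or its released code.

```python
# A toy, self-contained sketch of the shape of SMART's four steps. The greedy
# heuristics, costs, and names below are illustrative assumptions only; they are
# not the algorithms from the paper or its released implementation.
from collections import defaultdict

# Toy SDCNN: per-resource latencies (heterogeneous NSoC) and producer -> consumer
# dependencies that carry activation data (in bytes).
ops = {"conv1": {"cpu": 9, "tile0": 3, "tile1": 3},
       "conv2": {"cpu": 9, "tile0": 4, "tile1": 4},
       "pool":  {"cpu": 2, "tile0": 5, "tile1": 5},
       "fc":    {"cpu": 6, "tile0": 2, "tile1": 2}}
deps = [("conv1", "conv2", 4096), ("conv2", "pool", 1024), ("pool", "fc", 256)]
storage_near = {"cpu": "dram", "tile0": "sram0", "tile1": "sram1"}

# Step 1 (mapping): greedily place each operation on the resource that minimizes
# its accumulated busy time -- a crude proxy for a throughput-maximizing,
# self-timed schedule.
load, mapping = defaultdict(int), {}
for op, costs in ops.items():
    res = min(costs, key=lambda r: load[r] + costs[r])
    mapping[op] = res
    load[res] += costs[res]

# Step 2 (data placement): keep each operation's activations and synaptic weights
# in the storage bank nearest to its mapped resource to reduce communication.
placement = {op: storage_near[mapping[op]] for op in ops}

# Step 3 (IPC graph + transaction order): dependencies whose endpoints map to
# different resources become communication actors; ordering them by the
# producer's topological position is a simple stand-in for the paper's
# transaction partial-order algorithm.
topo = list(ops)  # insertion order already respects the dependency chain here
ipc = sorted((topo.index(src), src, dst, nbytes)
             for src, dst, nbytes in deps if mapping[src] != mapping[dst])

# Step 4 (hardware schedule): with communication fully overlapped with
# computation, the toy makespan is simply the busiest resource's load.
print("mapping:", mapping)
print("placement:", placement)
print("IPC transaction order:", [(s, d) for _, s, d, _ in ipc])
print("toy makespan:", max(load.values()))
```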
We evaluate SMART using 10 representative image, object, and language-based SDCNNs. Results show that SMART increases throughput by an average of 23% compared to a state-of-the-art scheduler. SMART is implemented entirely in software as a compiler extension; it requires no changes to the neuromorphic hardware or its interface to the CPUs, and it improves throughput with only a marginal increase in compilation time. SMART is released under the open-source MIT license at https://github.com/drexel-DISCO/SMART to foster future research.
About the journal
The design of embedded computing systems, both the software and hardware, increasingly relies on sophisticated algorithms, analytical models, and methodologies. ACM Transactions on Embedded Computing Systems (TECS) aims to present the leading work relating to the analysis, design, behavior, and experience with embedded computing systems.