A 640pW 32kHz switched-capacitor ILO analog-to-time converter for wake-up sensor applications
N. Goux, Jean-Baptiste Casanova, G. Pillonnet, F. Badets
This paper presents the architecture and ultra-low-power (ULP) implementation of a switched-capacitor injection-locked oscillator (SC-ILO) used as an analog-to-time converter for wake-up sensor applications. Thanks to a novel injection-locking scheme based on switched capacitors, the SC-ILO architecture avoids power-hungry constant injection current sources. The SC-ILO design parameters and transfer function are derived from an analytical study and used to optimize the design. The ULP implementation strategy, which balances power consumption, gain, modulation bandwidth, and output phase dynamic range, is optimized for audio wake-up sensor applications, which require ultra-low power consumption but only modest dynamic range. The paper reports experimental measurements of an SC-ILO circuit fabricated in a 22 nm FDSOI process. The measured chip exhibits a 129° phase-shift range and a 6 kHz bandwidth, yielding a 34.6 dB dynamic range at a power consumption of 640 pW under 0.4 V.
{"title":"A 640pW 32kHz switched-capacitor ILO analog-to-time converter for wake-up sensor applications","authors":"N. Goux, Jean-Baptiste Casanova, G. Pillonnet, F. Badets","doi":"10.1145/3370748.3406582","DOIUrl":"https://doi.org/10.1145/3370748.3406582","url":null,"abstract":"This paper presents the architecture and ultra-low power (ULP) implementation of a switched-capacitor injection-locked oscillator (SC-ILO) used as analog-to-time converter for wake-up sensor applications. Thanks to a novel injection-locking scheme based on switched capacitors, the SC-ILO architecture avoids the use of power-hungry constant injection current sources. The SC-ILO design parameters and transfer function, resulting from an analytical study, are determined, and used to optimize the design. The ULP implementation strategy regarding power consumption, gain, modulation bandwidth and output phase dynamic range is presented and optimized to be compatible with audio wake-up sensor application that require ultra-low power consumption but low dynamic range performances. This paper reports SC-ILO circuit experimental measurements, fabricated in a 22 nm FDSOI process. The measured chip exhibits a 129° phase-shift range, a 6kHz bandwidth leading to a 34.6dB-dynamic range for a power consumption of 640pW under 0.4V.","PeriodicalId":116486,"journal":{"name":"Proceedings of the ACM/IEEE International Symposium on Low Power Electronics and Design","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114828387","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Swan: a two-step power management for distributed search engines
Liang Zhou, L. Bhuyan, Kadangode K. Ramakrishnan
The service quality of web search depends considerably on the request tail latency of Index Serving Nodes (ISNs), prompting data centers to operate them at low utilization, which wastes server power. ISNs can be made more energy-efficient by using Dynamic Voltage and Frequency Scaling (DVFS) or sleep-state techniques to exploit slack in search-query latency. However, state-of-the-art frameworks use a single distribution to predict a request's service time and select a high-percentile tail latency to set the CPU's frequency or sleep states, which misses many energy-saving opportunities. In this paper, we develop a simple linear-regression predictor that estimates each individual search request's service time from the length of the request's posting list. The major challenge in using this prediction for power management lies in reducing deadline misses caused by prediction errors while still improving energy efficiency. We present Swan, a two-Step poWer mAnagement for distributed search eNgines. For each request, Swan selects an initial, lower frequency to save power, then boosts the CPU frequency at just the right time to meet the deadline. Additionally, Swan re-configures the boosting instant when a critical request arrives, avoiding deadline violations. Swan is implemented on the widely used Solr search engine and evaluated with two representative, large query traces. Evaluations show that Swan outperforms state-of-the-art approaches, saving at least 39% CPU power on average.
{"title":"Swan: a two-step power management for distributed search engines","authors":"Liang Zhou, L. Bhuyan, Kadangode K. Ramakrishnan","doi":"10.1145/3370748.3406573","DOIUrl":"https://doi.org/10.1145/3370748.3406573","url":null,"abstract":"The service quality of web search depends considerably on the request tail latency from Index Serving Nodes (ISNs), prompting data centers to operate them at low utilization and wasting server power. ISNs can be made more energy efficient utilizing Dynamic Voltage and Frequency Scaling (DVFS) or sleep states techniques to take advantage of slack in latency of search queries. However, state-of-the-art frameworks use a single distribution to predict a request's service time and select a high percentile tail latency to derive the CPU's frequency or sleep states. Unfortunately, this misses plenty of energy saving opportunities. In this paper, we develop a simple linear regression predictor to estimate each individual search request's service time, based on the length of the request's posting list. To use this prediction for power management, the major challenge lies in reducing miss rates for deadlines due to prediction errors, while improving energy efficiency. We present Swan, a two-Step poWer mAnagement for distributed search eNgines. For each request, Swan selects an initial, lower frequency to optimize power, and then appropriately boosts the CPU frequency just at the right time to meet the deadline. Additionally, we re-configure the time instant for boosting frequency, when a critical request arrives and avoid deadline violations. Swan is implemented on the widely-used Solr search engine and evaluated with two representative, large query traces. Evaluations show Swan outperforms state-of-the-art approaches, saving at least 39% CPU power on average.","PeriodicalId":116486,"journal":{"name":"Proceedings of the ACM/IEEE International Symposium on Low Power Electronics and Design","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130002114","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multi-channel precision-sparsity-adapted inter-frame differential data codec for video neural network processor
Yixiong Yang, Zhe Yuan, Fang Su, Fanyang Cheng, Zhuqing Yuan, Huazhong Yang, Yongpan Liu
Activation I/O traffic is a critical bottleneck of video neural network processors. Recent works adopt an inter-frame difference method to reduce activation size, but current methods cannot fully adapt to the varying precision and sparsity of the differential data. In this paper, we propose a multi-channel precision-sparsity-adapted codec, which separates the differential activations and encodes them in multiple channels. We analyze the best-suited encoding for each channel and select the channel count that yields the best performance. A two-channel codec has been implemented in hardware in an ASIC accelerator and can encode/decode activations in parallel. Experimental results show that our coding achieves a 2.2x-18.2x compression rate in three scenarios with no accuracy loss, and the hardware achieves 42x/174x improvements in speed and energy efficiency over a software codec.
{"title":"Multi-channel precision-sparsity-adapted inter-frame differential data codec for video neural network processor","authors":"Yixiong Yang, Zhe Yuan, Fang Su, Fanyang Cheng, Zhuqing Yuan, Huazhong Yang, Yongpan Liu","doi":"10.1145/3370748.3407002","DOIUrl":"https://doi.org/10.1145/3370748.3407002","url":null,"abstract":"Activation I/O traffic is a critical bottleneck of video neural network processor. Recent works adopted an inter-frame difference method to reduce activation size. However, current methods can't fully adapt to the various precision and sparsity in differential data. In this paper, we propose the multi-channel precision-sparsity-adapted codec, which will separate the differential activation and encode activation in multiple channels. We analyze the most adapted encoding of each channel, and select the optimal channel number with the best performance. A two-channel codec hardware has been implemented in the ASIC accelerator, which can encode/decode activations in parallel. Experiment results show that our coding achieves 2.2x-18.2x compression rate in three scenarios with no accuracy loss, and the hardware has 42x/174x improvement on speed and energy-efficiency compared with the software codec.","PeriodicalId":116486,"journal":{"name":"Proceedings of the ACM/IEEE International Symposium on Low Power Electronics and Design","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132182978","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
DidaSel
Khushboo Rani, Sukarn Agarwal, H. Kapoor
In a multi-core system, communication across cores is managed by an on-chip interconnect called the Network-on-Chip (NoC). The NoC, however, suffers from high communication delay and high network power consumption, and the buffers of the NoC router consume a considerable amount of leakage power. This paper reduces leakage power by using Non-Volatile Memory (NVM) technology-based buffers. NVM technologies offer higher density and low leakage but suffer from costly write operations and weaker write endurance, characteristics that affect the total network power consumption, network latency, and the lifetime of the router as a whole. In this paper, we propose a write-reduction technique based on the dirty flits present in write-back data packets. We also propose a dirty-flit-based Virtual Channel (VC) allocation technique that distributes writes across NVM-based VCs to improve the lifetime of NVM buffers. Experimental evaluation on a full-system simulator shows that the proposed policy obtains a 53% reduction in write-back flits, which results in 27% fewer total network flits on average. Together, these yield a significant decrease in total and dynamic network power consumption, and the policy also shows remarkable improvement in buffer lifetime.
{"title":"DidaSel","authors":"Khushboo Rani, Sukarn Agarwal, H. Kapoor","doi":"10.1145/3370748.3406565","DOIUrl":"https://doi.org/10.1145/3370748.3406565","url":null,"abstract":"In a multi-core system, communication across cores is managed by an on-chip interconnect called Network-on-Chip (NoC). The utilization of NoC results in limitations such as high communication delay and high network power consumption. The buffers of the NoC router consume a considerable amount of leakage power. This paper attempts to reduce leakage power consumption by using Non-Volatile Memory technology-based buffers. NVM technology has the advantage of higher density and low leakage but suffers from costly write operation, and weaker write endurance. These characteristics impact on the total network power consumption, network latency, and lifetime of the router as a whole. In this paper, we propose a write reduction technique, which is based on dirty flits present in write-back data packets. The method also suggests a dirty flit based Virtual Channel (VC) allocation technique that distributes writes in NVM technology-based VCs to improve the lifetime of NVM buffers. The experimental evaluation on the full system simulator shows that the proposed policy obtains a 53% reduction in write-back flits, which results in 27% lesser total network flit on average. All these results in a significant decrease in total and dynamic network power consumption. The policy also shows remarkable improvement in the lifetime.","PeriodicalId":116486,"journal":{"name":"Proceedings of the ACM/IEEE International Symposium on Low Power Electronics and Design","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125133122","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Enabling efficient ReRAM-based neural network computing via crossbar structure adaptive optimization
Chenchen Liu, Fuxun Yu, Zhuwei Qin, Xiang Chen
Resistive random-access memory (ReRAM) based accelerators have been widely studied to achieve fast and energy-efficient neural network computing. Neural network optimization algorithms such as sparsity were developed for efficient computing on traditional architectures such as CPUs and GPUs, but their efficiency gains are hindered when deployed on ReRAM-based accelerators because of the unique crossbar-structured computation, and algorithm-hardware co-optimization specific to ReRAM-based architectures is still lacking. In this work, we propose an efficient neural network computing framework specialized for the crossbar-structured computation of ReRAM-based accelerators. The proposed framework includes crossbar-specific feature-map pruning and adaptive neural network deployment. Experimental results show that our design improves computing accuracy by 9.1% compared with state-of-the-art sparse neural networks. On a well-known ReRAM-based DNN accelerator, the proposed framework demonstrates up to 1.4× speedup, 4.3× power efficiency, and 4.4× area savings.
{"title":"Enabling efficient ReRAM-based neural network computing via crossbar structure adaptive optimization","authors":"Chenchen Liu, Fuxun Yu, Zhuwei Qin, Xiang Chen","doi":"10.1145/3370748.3406581","DOIUrl":"https://doi.org/10.1145/3370748.3406581","url":null,"abstract":"Resistive random-access memory (ReRAM) based accelerators have been widely studied to achieve efficient neural network computing in speed and energy. Neural network optimization algorithms such as sparsity are developed to achieve efficient neural network computing on traditional computer architectures such as CPU and GPU. However, such computing efficiency improvement is hindered when deploying these algorithms on the ReRAM-based accelerator because of its unique crossbar-structural computations. And a specific algorithm and hardware co-optimization for the ReRAM-based architecture is still in a lack. In this work, we propose an efficient neural network computing framework that is specialized for the crossbar-structural computations on the ReRAM-based accelerators. The proposed framework includes a crossbar specific feature map pruning and an adaptive neural network deployment. Experimental results show our design can improve the computing accuracy by 9.1% compared with the state-of-the-art sparse neural networks. Based on a famous ReRAM-based DNN accelerator, the proposed framework demonstrates up to 1.4× speedup, 4.3× power efficiency, and 4.4× area saving.","PeriodicalId":116486,"journal":{"name":"Proceedings of the ACM/IEEE International Symposium on Low Power Electronics and Design","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127106744","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An 88.6nW ozone pollutant sensing interface IC with a 159 dB dynamic range
Rishika Agarwala, Peng Wang, Akhilesh Tanneeru, Bongmook Lee, V. Misra, B. Calhoun
This paper presents a low-power resistive sensor interface IC designed at 0.6 V for ozone pollutant sensing. The large resistance range of gas sensors makes designing a low-power sensor interface challenging. Existing architectures cannot achieve a high dynamic range while enabling low-VDD operation, resulting in high power consumption regardless of the adopted architecture. We present an adaptive architecture that provides baseline resistance cancellation and dynamic current control to enable low-VDD operation while maintaining a dynamic range of 159 dB across 20 kΩ-1 MΩ. The sensor interface IC is fabricated in a 65 nm bulk CMOS process and consumes 88.6 nW, 300x lower than the state of the art. The full-system power, including the proposed sensor interface IC, the analog-to-digital converter, and peripheral circuits, ranges from 116 nW to 1.09 μW. The interface's performance was verified with custom resistive metal-oxide sensors at ozone concentrations from 50 ppb to 900 ppb.
{"title":"An 88.6nW ozone pollutant sensing interface IC with a 159 dB dynamic range","authors":"Rishika Agarwala, Peng Wang, Akhilesh Tanneeru, Bongmook Lee, V. Misra, B. Calhoun","doi":"10.1145/3370748.3406579","DOIUrl":"https://doi.org/10.1145/3370748.3406579","url":null,"abstract":"This paper presents a low power resistive sensor interface IC designed at 0.6V for ozone pollutant sensing. The large resistance range of gas sensors poses challenges in designing a low power sensor interface. Exiting architectures are insufficient for achieving a high dynamic range while enabling low VDD operation, resulting in high power consumption regardless of the adopted architecture. We present an adaptive architecture that provides baseline resistance cancellation and dynamic current control to enable low VDD operation while maintaining a dynamic range of 159dB across 20kΩ-1MΩ. The sensor interface IC is fabricated in a 65nm bulk CMOS process and consumes 88.6nW of power which is 300x lower than the state-of-art. The full system power ranges between 116 nW - 1.09 μW which includes the proposed sensor interface IC, analog to digital converter and peripheral circuits. The sensor interface's performance was verified using custom resistive metal-oxide sensors for ozone concentrations from 50 ppb to 900 ppb.","PeriodicalId":116486,"journal":{"name":"Proceedings of the ACM/IEEE International Symposium on Low Power Electronics and Design","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116694190","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
BLINK: bit-sparse LSTM inference kernel enabling efficient calcium trace extraction for neurofeedback devices
Zhe Chen, Garrett J. Blair, H. T. Blair, J. Cong
Miniaturized fluorescent calcium imaging microscopes are widely used for monitoring the activity of large populations of neurons in freely behaving animals in vivo. Conventional calcium image analyses extract calcium traces through iterative, bulk image processing and struggle to meet the power and latency requirements of neurofeedback devices. In this paper, we propose a calcium image processing pipeline built on a bit-sparse long short-term memory (LSTM) inference kernel (BLINK) for efficient calcium trace extraction. It greatly reduces power and latency while preserving trace-extraction accuracy. We implemented the customized pipeline on the Ultra96 platform; it can extract calcium traces from up to 1024 cells with sub-ms latency on a single FPGA device. We also designed the BLINK circuits in a 28 nm technology. Evaluation shows that the proposed bit-sparse representation reduces circuit area by 38.7% and power consumption by 38.4% with no accuracy loss. The BLINK circuits achieve 410 pJ/inference, a 6293x and 52.4x gain in energy efficiency over evaluations on a high-performance CPU and GPU, respectively.
{"title":"BLINK: bit-sparse LSTM inference kernel enabling efficient calcium trace extraction for neurofeedback devices","authors":"Zhe Chen, Garrett J. Blair, H. T. Blair, J. Cong","doi":"10.1145/3370748.3406552","DOIUrl":"https://doi.org/10.1145/3370748.3406552","url":null,"abstract":"Miniaturized fluorescent calcium imaging microscopes are widely used for monitoring the activity of a large population of neurons in freely behaving animals in vivo. Conventional calcium image analyses extract calcium traces by iterative and bulk image processing and they are hard to meet the power and latency requirements for neurofeedback devices. In this paper, we propose the calcium image processing pipeline based on a bit-sparse long short-term memory (LSTM) inference kernel (BLINK) for efficient calcium trace extraction. It largely reduces the power and latency while remaining the trace extraction accuracy. We implemented the customized pipeline on the Ultra96 platform. It can extract calcium traces from up to 1024 cells with sub-ms latency on a single FPGA device. We designed the BLINK circuits in a 28-nm technology. Evaluation shows that the proposed bit-sparse representation can reduce the circuit area by 38.7% and save the power consumption by 38.4% without accuracy loss. The BLINK circuits achieve 410 pJ/inference, which has 6293x and 52.4x gains in energy efficiency compared to the evaluation on the high performance CPU and GPU, respectively.","PeriodicalId":116486,"journal":{"name":"Proceedings of the ACM/IEEE International Symposium on Low Power Electronics and Design","volume":"171 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116395253","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
NS-KWS: joint optimization of near-sensor processing architecture and low-precision GRU for always-on keyword spotting
Qin Li, Sheng Lin, Changlu Liu, Yidong Liu, F. Qiao, Yanzhi Wang, Huazhong Yang
Keyword spotting (KWS) is a crucial front-end module in speech interaction systems. The always-on KWS module monitors input speech and activates the energy-hungry backend system only when keywords are detected, so its performance determines the standby performance of the whole system. Conventional KWS modules, however, face a power-consumption bottleneck in the data conversion near the microphone sensor. In this paper, we propose an energy-efficient near-sensor processing architecture for always-on KWS that enhances the continuous perception of the whole speech interaction system. By performing keyword detection in the analog domain right after the microphone sensor, this architecture avoids the energy-consuming data converter and achieves faster operation than conventional realizations. In addition, we propose a lightweight gated recurrent unit (GRU) with negligible accuracy loss to preserve recognition performance. We implemented and fabricated the proposed KWS system in a 0.18 μm CMOS process. In system-level evaluation, the hardware-software co-designed architecture achieves 65.6% energy savings and a 71x speedup over the state of the art.
{"title":"NS-KWS: joint optimization of near-sensor processing architecture and low-precision GRU for always-on keyword spotting","authors":"Qin Li, Sheng Lin, Changlu Liu, Yidong Liu, F. Qiao, Yanzhi Wang, Huazhong Yang","doi":"10.1145/3370748.3407001","DOIUrl":"https://doi.org/10.1145/3370748.3407001","url":null,"abstract":"Keyword spotting (KWS) is a crucial front-end module in the whole speech interaction system. The always-on KWS module detects input words, then activates the energy-consuming complex backend system when keywords are detected. The performance of the KWS determines the standby performance of the whole system and the conventional KWS module encounters the power consumption bottleneck problem of the data conversion near the microphone sensor. In this paper, we propose an energy-efficient near-sensor processing architecture for always-on KWS, which could enhance continuous perception of the whole speech interaction system. By implementing the keyword detection in the analog domain after the microphone sensor, this architecture avoids energy-consuming data converter and achieves faster speed than conventional realizations. In addition, we propose a lightweight gated recurrent unit (GRU) with negligible accuracy loss to ensure the recognition performance. We also implement and fabricate the proposed KWS system with the CMOS 0.18μm process. In the system-view evaluation results, the hardware-software co-design architecture achieves 65.6% energy consumption saving and 71 times speed up than state of the art.","PeriodicalId":116486,"journal":{"name":"Proceedings of the ACM/IEEE International Symposium on Low Power Electronics and Design","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114476933","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
SHEARer
Behnam Khaleghi, Sahand Salamat, Anthony Thomas, Fatemeh Asgarinejad, Yeseong Kim, Tajana Rosing
Hyperdimensional computing (HD) is an emerging paradigm for machine learning based on evidence that the brain computes on high-dimensional, distributed representations of data. The main operation of HD is encoding, which transfers input data to hyperspace by mapping each input feature to a hypervector, followed by a bundling procedure that adds up the hypervectors to produce the encoding hypervector. HD operations are simple and highly parallelizable, but their sheer number hampers HD's efficiency in the embedded domain. In this paper, we propose SHEARer, an algorithm-hardware co-optimization that improves the performance and energy consumption of HD computing. We gain insight from a prudent scheme for approximating the hypervectors that, thanks to the error resiliency of HD, has minimal impact on accuracy while offering great potential for hardware optimization. Unlike previous works that generate the encoding hypervectors in full precision and then perform ex-post quantization, we compute the encoding hypervectors approximately, which saves resources yet affords high accuracy. We also propose a novel FPGA architecture that achieves striking performance through massive parallelism with low power consumption. Moreover, we develop a software framework that enables training HD models by emulating the proposed approximate encodings. The FPGA implementation of SHEARer achieves an average throughput boost of 104,904× (15.7×) and energy savings of up to 56,044× (301×) compared to state-of-the-art encoding methods implemented on a Raspberry Pi 3 (GeForce GTX 1080 Ti) using practical machine learning datasets.
{"title":"SHEAR\u0000 er","authors":"Behnam Khaleghi, Sahand Salamat, Anthony Thomas, Fatemeh Asgarinejad, Yeseong Kim, Tajana Rosing","doi":"10.1145/3370748.3406587","DOIUrl":"https://doi.org/10.1145/3370748.3406587","url":null,"abstract":"Hyperdimensional computing (HD) is an emerging paradigm for machine learning based on the evidence that the brain computes on high-dimensional, distributed, representations of data. The main operation of HD is encoding, which transfers the input data to hyperspace by mapping each input feature to a hypervector, followed by a bundling procedure that adds up the hypervectors to realize the encoding hypervector. The operations of HD are simple and highly parallelizable, but the large number of operations hampers the efficiency of HD in embedded domain. In this paper, we propose SHEARer, an algorithmhardware co-optimization to improve the performance and energy consumption of HD computing. We gain insight from a prudent scheme of approximating the hypervectors that, thanks to error resiliency of HD, has minimal impact on accuracy while provides high prospect for hardware optimization. Unlike previous works that generate the encoding hypervectors in full precision and then and then perform ex-post quantization, we compute the encoding hypervectors in an approximate manner that saves resources yet affords high accuracy. We also propose a novel FPGA architecture that achieves striking performance through massive parallelism with low power consumption. Moreover, we develop a software framework that enables training HD models by emulating the proposed approximate encodings. The FPGA implementation of SHEARer achieves an average throughput boost of 104,904× (15.7×) and energy savings of up to 56,044× (301×) compared to state-of-the-art encoding methods implemented on Raspberry Pi 3 (GeForce GTX 1080 Ti) using practical machine learning datasets.","PeriodicalId":116486,"journal":{"name":"Proceedings of the ACM/IEEE International Symposium on Low Power Electronics and Design","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123791725","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}