Orian Leitersdorf, Yahav Boneh, Gonen Gazit, Ronny Ronen, Shahar Kvatinsky
{"title":"FourierPIM:高吞吐量内存快速傅立叶变换和多项式乘法","authors":"Orian Leitersdorf, Yahav Boneh, Gonen Gazit, Ronny Ronen, Shahar Kvatinsky","doi":"10.1016/j.memori.2023.100034","DOIUrl":null,"url":null,"abstract":"<div><p>The Discrete Fourier Transform (DFT) is essential for various applications ranging from signal processing to convolution and polynomial multiplication. The groundbreaking Fast Fourier Transform (FFT) algorithm reduces DFT time complexity from the naive <span><math><mrow><mi>O</mi><mrow><mo>(</mo><msup><mrow><mi>n</mi></mrow><mrow><mn>2</mn></mrow></msup><mo>)</mo></mrow></mrow></math></span> to <span><math><mrow><mi>O</mi><mrow><mo>(</mo><mi>n</mi><mo>log</mo><mi>n</mi><mo>)</mo></mrow></mrow></math></span>, and recent works have sought further acceleration through parallel architectures such as GPUs. Unfortunately, accelerators such as GPUs cannot exploit their full computing capabilities since memory access becomes the bottleneck. Therefore, this paper accelerates the FFT algorithm using digital Processing-in-Memory (PIM) architectures that shift computation into the memory by exploiting physical devices capable of both storage and logic (e.g., memristors). We propose an <span><math><mrow><mi>O</mi><mrow><mo>(</mo><mo>log</mo><mi>n</mi><mo>)</mo></mrow></mrow></math></span> in-memory FFT algorithm that can also be performed in parallel across multiple arrays for <em>high-throughput batched execution</em>, supporting both fixed-point and floating-point numbers. Through the convolution theorem, we extend this algorithm to <span><math><mrow><mi>O</mi><mrow><mo>(</mo><mo>log</mo><mi>n</mi><mo>)</mo></mrow></mrow></math></span> polynomial multiplication – a fundamental task for applications such as cryptography. We evaluate FourierPIM on a publicly-available cycle-accurate simulator that verifies both correctness and performance, and demonstrate <span><math><mrow><mtext>5–15</mtext><mo>×</mo></mrow></math></span> throughput and <span><math><mrow><mtext>4–13</mtext><mo>×</mo></mrow></math></span> energy improvement over the NVIDIA cuFFT library on state-of-the-art GPUs for FFT and polynomial multiplication.</p></div>","PeriodicalId":100915,"journal":{"name":"Memories - Materials, Devices, Circuits and Systems","volume":"4 ","pages":"Article 100034"},"PeriodicalIF":0.0000,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"FourierPIM: High-throughput in-memory Fast Fourier Transform and polynomial multiplication\",\"authors\":\"Orian Leitersdorf, Yahav Boneh, Gonen Gazit, Ronny Ronen, Shahar Kvatinsky\",\"doi\":\"10.1016/j.memori.2023.100034\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>The Discrete Fourier Transform (DFT) is essential for various applications ranging from signal processing to convolution and polynomial multiplication. The groundbreaking Fast Fourier Transform (FFT) algorithm reduces DFT time complexity from the naive <span><math><mrow><mi>O</mi><mrow><mo>(</mo><msup><mrow><mi>n</mi></mrow><mrow><mn>2</mn></mrow></msup><mo>)</mo></mrow></mrow></math></span> to <span><math><mrow><mi>O</mi><mrow><mo>(</mo><mi>n</mi><mo>log</mo><mi>n</mi><mo>)</mo></mrow></mrow></math></span>, and recent works have sought further acceleration through parallel architectures such as GPUs. Unfortunately, accelerators such as GPUs cannot exploit their full computing capabilities since memory access becomes the bottleneck. 
Therefore, this paper accelerates the FFT algorithm using digital Processing-in-Memory (PIM) architectures that shift computation into the memory by exploiting physical devices capable of both storage and logic (e.g., memristors). We propose an <span><math><mrow><mi>O</mi><mrow><mo>(</mo><mo>log</mo><mi>n</mi><mo>)</mo></mrow></mrow></math></span> in-memory FFT algorithm that can also be performed in parallel across multiple arrays for <em>high-throughput batched execution</em>, supporting both fixed-point and floating-point numbers. Through the convolution theorem, we extend this algorithm to <span><math><mrow><mi>O</mi><mrow><mo>(</mo><mo>log</mo><mi>n</mi><mo>)</mo></mrow></mrow></math></span> polynomial multiplication – a fundamental task for applications such as cryptography. We evaluate FourierPIM on a publicly-available cycle-accurate simulator that verifies both correctness and performance, and demonstrate <span><math><mrow><mtext>5–15</mtext><mo>×</mo></mrow></math></span> throughput and <span><math><mrow><mtext>4–13</mtext><mo>×</mo></mrow></math></span> energy improvement over the NVIDIA cuFFT library on state-of-the-art GPUs for FFT and polynomial multiplication.</p></div>\",\"PeriodicalId\":100915,\"journal\":{\"name\":\"Memories - Materials, Devices, Circuits and Systems\",\"volume\":\"4 \",\"pages\":\"Article 100034\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-07-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Memories - Materials, Devices, Circuits and Systems\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2773064623000117\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Memories - Materials, Devices, Circuits and Systems","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2773064623000117","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
FourierPIM: High-throughput in-memory Fast Fourier Transform and polynomial multiplication
The Discrete Fourier Transform (DFT) is essential for various applications ranging from signal processing to convolution and polynomial multiplication. The groundbreaking Fast Fourier Transform (FFT) algorithm reduces DFT time complexity from the naive O(n²) to O(n log n), and recent works have sought further acceleration through parallel architectures such as GPUs. Unfortunately, accelerators such as GPUs cannot exploit their full computing capabilities since memory access becomes the bottleneck. Therefore, this paper accelerates the FFT algorithm using digital Processing-in-Memory (PIM) architectures that shift computation into the memory by exploiting physical devices capable of both storage and logic (e.g., memristors). We propose an O(log n) in-memory FFT algorithm that can also be performed in parallel across multiple arrays for high-throughput batched execution, supporting both fixed-point and floating-point numbers. Through the convolution theorem, we extend this algorithm to O(log n) polynomial multiplication – a fundamental task for applications such as cryptography. We evaluate FourierPIM on a publicly-available cycle-accurate simulator that verifies both correctness and performance, and demonstrate 5–15× throughput and 4–13× energy improvement over the NVIDIA cuFFT library on state-of-the-art GPUs for FFT and polynomial multiplication.
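For readers unfamiliar with the two operations the abstract builds on, the sketch below illustrates the standard radix-2 Cooley–Tukey FFT (the O(n log n) algorithm the paper accelerates) and polynomial multiplication via the convolution theorem. It is a minimal plain-Python reference for the underlying math only, not the paper's in-memory (PIM) algorithm; the function names (fft_recursive, ifft_recursive, poly_mul) are illustrative.

```python
import cmath

def fft_recursive(a):
    """Radix-2 Cooley-Tukey FFT: O(n log n) instead of the naive O(n^2) DFT.
    Assumes len(a) is a power of two."""
    n = len(a)
    if n == 1:
        return a[:]
    even = fft_recursive(a[0::2])   # FFT of even-indexed samples
    odd = fft_recursive(a[1::2])    # FFT of odd-indexed samples
    out = [0j] * n
    for k in range(n // 2):
        twiddle = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + twiddle
        out[k + n // 2] = even[k] - twiddle
    return out

def ifft_recursive(a):
    """Inverse FFT via the conjugation identity: ifft(a) = conj(fft(conj(a))) / n."""
    n = len(a)
    y = fft_recursive([x.conjugate() for x in a])
    return [x.conjugate() / n for x in y]

def poly_mul(p, q):
    """Multiply two coefficient vectors using the convolution theorem:
    transform both, multiply pointwise, then transform back."""
    size = 1
    while size < len(p) + len(q) - 1:   # pad to a power of two large enough for the product
        size *= 2
    fp = fft_recursive(list(p) + [0] * (size - len(p)))
    fq = fft_recursive(list(q) + [0] * (size - len(q)))
    coeffs = ifft_recursive([x * y for x, y in zip(fp, fq)])
    return [round(c.real) for c in coeffs[:len(p) + len(q) - 1]]

# Usage: (1 + 2x)(3 + 4x) = 3 + 10x + 8x^2
print(poly_mul([1, 2], [3, 4]))  # [3, 10, 8]
```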