Scalable, high-speed on-chip-based NDN name forwarding using FPGA

Divya Saxena, Suyash Mahar, V. Raychoudhury, Jiannong Cao

Proceedings of the 20th International Conference on Distributed Computing and Networking
Published: 2019-01-04
DOI: 10.1145/3288599.3288613
Citations: 1
Abstract
Named Data Networking (NDN) is the most promising candidate among the proposed content-based future Internet architectures. In NDN, the Forwarding Information Base (FIB) maintains name prefixes and their corresponding outgoing interface(s), and forwards incoming packets by computing the longest prefix match (LPM) of their content names (CNs). A CN in NDN is of variable length and is maintained using a hierarchical structure; performing name lookup for packet forwarding at wire speed is therefore a challenging task. GPUs can deliver much higher lookup speeds than CPUs, but they are often limited by CPU-GPU transfer latencies. In this paper, we exploit the massive parallel processing power of Field-Programmable Gate Arrays (FPGAs) and propose a scalable, high-speed on-chip SRAM-based NDN name forwarding scheme for the FIB (OnChip-FIB). OnChip-FIB scales well as the number of prefixes grows, owing to its low storage complexity and low resource utilization. Extensive simulation results show that the OnChip-FIB scheme achieves a measured lookup latency of 1.06 μs with 26% on-chip block memory usage on a single Xilinx UltraScale FPGA for a 50K-name dataset.
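To make the LPM operation described in the abstract concrete, the sketch below shows a naive software longest-prefix match over hierarchical NDN content names, with the FIB modeled as a dictionary from name prefixes to outgoing interface lists. This is purely illustrative: the prefixes and interface names are hypothetical, and the paper's actual scheme is an on-chip SRAM-based FPGA design, not a dictionary scan.

```python
# Illustrative sketch only (NOT the paper's OnChip-FIB design):
# longest-prefix match over hierarchical NDN content names.
# The FIB is a dict mapping name prefixes to outgoing interface lists;
# all prefixes and interface names below are hypothetical.

def lpm_lookup(fib, content_name):
    """Return the interfaces for the longest matching prefix, or None."""
    components = content_name.strip("/").split("/")
    # Try progressively shorter prefixes: /a/b/c, then /a/b, then /a.
    for end in range(len(components), 0, -1):
        prefix = "/" + "/".join(components[:end])
        if prefix in fib:
            return fib[prefix]
    return None

fib = {
    "/edu": ["if0"],
    "/edu/example": ["if1"],
    "/edu/example/videos": ["if2", "if3"],
}

print(lpm_lookup(fib, "/edu/example/videos/demo.mp4"))  # ['if2', 'if3']
print(lpm_lookup(fib, "/edu/other/page"))               # ['if0']
```

Because a CN has a variable number of components, this linear probing costs one lookup per component in the worst case; the hardware schemes the paper compares against (and OnChip-FIB itself) aim to bound this cost so lookup can run at wire speed.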